Test Report: KVM_Linux_crio 20319

648f194b476483b13df21998417ef6977c25d9d6:2025-01-27:38091

Tests failed (13/309)

TestAddons/parallel/Ingress (158.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-952541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-952541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-952541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9fe24c9f-36f9-4f56-b5db-b573fec024ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9fe24c9f-36f9-4f56-b5db-b573fec024ea] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.004046597s
I0127 10:35:23.028824   26072 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-952541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.237831036s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-952541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.92
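For reference, the failed probe can be replayed by hand against the same profile; a minimal sketch reusing the exact command from addons_test.go:262 above (the explicit -m 30 curl timeout is an illustrative addition, not part of the test):

    out/minikube-linux-amd64 -p addons-952541 ssh \
      "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # curl exits 28 when a transfer times out, which surfaces as the
    # "ssh: Process exited with status 28" seen in the stderr block above.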
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-952541 -n addons-952541
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 logs -n 25: (1.210295569s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-113956                                                                     | download-only-113956 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| delete  | -p download-only-223031                                                                     | download-only-223031 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| delete  | -p download-only-113956                                                                     | download-only-113956 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-963021 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | binary-mirror-963021                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35307                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-963021                                                                     | binary-mirror-963021 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | addons-952541                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | addons-952541                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-952541 --wait=true                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:34 UTC | 27 Jan 25 10:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:34 UTC | 27 Jan 25 10:34 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:34 UTC | 27 Jan 25 10:34 UTC |
	|         | -p addons-952541                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:34 UTC | 27 Jan 25 10:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:34 UTC | 27 Jan 25 10:34 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-952541 ip                                                                            | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-952541 ssh cat                                                                       | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | /opt/local-path-provisioner/pvc-ecb4fb5f-9049-49d2-a5ca-1bdd762143ef_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-952541 addons disable                                                                | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-952541 ssh curl -s                                                                   | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-952541 addons                                                                        | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:35 UTC | 27 Jan 25 10:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-952541 ip                                                                            | addons-952541        | jenkins | v1.35.0 | 27 Jan 25 10:37 UTC | 27 Jan 25 10:37 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:17.183726   26720 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:32:17.183825   26720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:17.183838   26720 out.go:358] Setting ErrFile to fd 2...
	I0127 10:32:17.183845   26720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:17.184039   26720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 10:32:17.184635   26720 out.go:352] Setting JSON to false
	I0127 10:32:17.185516   26720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4437,"bootTime":1737969500,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:32:17.185603   26720 start.go:139] virtualization: kvm guest
	I0127 10:32:17.187853   26720 out.go:177] * [addons-952541] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 10:32:17.189140   26720 notify.go:220] Checking for updates...
	I0127 10:32:17.189158   26720 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 10:32:17.190392   26720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:32:17.191585   26720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:32:17.192636   26720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:32:17.193697   26720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 10:32:17.194743   26720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 10:32:17.195840   26720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:32:17.226503   26720 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 10:32:17.227728   26720 start.go:297] selected driver: kvm2
	I0127 10:32:17.227742   26720 start.go:901] validating driver "kvm2" against <nil>
	I0127 10:32:17.227752   26720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 10:32:17.228466   26720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:17.228559   26720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 10:32:17.243359   26720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 10:32:17.243402   26720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 10:32:17.243599   26720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 10:32:17.243641   26720 cni.go:84] Creating CNI manager for ""
	I0127 10:32:17.243681   26720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 10:32:17.243690   26720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 10:32:17.243729   26720 start.go:340] cluster config:
	{Name:addons-952541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-952541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:32:17.243813   26720 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:17.245539   26720 out.go:177] * Starting "addons-952541" primary control-plane node in "addons-952541" cluster
	I0127 10:32:17.246878   26720 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 10:32:17.246906   26720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 10:32:17.246915   26720 cache.go:56] Caching tarball of preloaded images
	I0127 10:32:17.246970   26720 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 10:32:17.246980   26720 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 10:32:17.247234   26720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/config.json ...
	I0127 10:32:17.247255   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/config.json: {Name:mkb7bf08f5c573d016da189cdbb2abc7264a16c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:17.247378   26720 start.go:360] acquireMachinesLock for addons-952541: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 10:32:17.247419   26720 start.go:364] duration metric: took 29.618µs to acquireMachinesLock for "addons-952541"
	I0127 10:32:17.247435   26720 start.go:93] Provisioning new machine with config: &{Name:addons-952541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-952541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 10:32:17.247486   26720 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 10:32:17.249750   26720 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 10:32:17.249868   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:32:17.249921   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:32:17.264126   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0127 10:32:17.264545   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:32:17.265061   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:32:17.265081   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:32:17.265379   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:32:17.265550   26720 main.go:141] libmachine: (addons-952541) Calling .GetMachineName
	I0127 10:32:17.265696   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:17.265836   26720 start.go:159] libmachine.API.Create for "addons-952541" (driver="kvm2")
	I0127 10:32:17.265859   26720 client.go:168] LocalClient.Create starting
	I0127 10:32:17.265891   26720 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem
	I0127 10:32:17.358470   26720 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem
	I0127 10:32:17.521372   26720 main.go:141] libmachine: Running pre-create checks...
	I0127 10:32:17.521393   26720 main.go:141] libmachine: (addons-952541) Calling .PreCreateCheck
	I0127 10:32:17.521866   26720 main.go:141] libmachine: (addons-952541) Calling .GetConfigRaw
	I0127 10:32:17.522270   26720 main.go:141] libmachine: Creating machine...
	I0127 10:32:17.522284   26720 main.go:141] libmachine: (addons-952541) Calling .Create
	I0127 10:32:17.522404   26720 main.go:141] libmachine: (addons-952541) creating KVM machine...
	I0127 10:32:17.522421   26720 main.go:141] libmachine: (addons-952541) creating network...
	I0127 10:32:17.523652   26720 main.go:141] libmachine: (addons-952541) DBG | found existing default KVM network
	I0127 10:32:17.524323   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:17.524194   26743 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I0127 10:32:17.524372   26720 main.go:141] libmachine: (addons-952541) DBG | created network xml: 
	I0127 10:32:17.524390   26720 main.go:141] libmachine: (addons-952541) DBG | <network>
	I0127 10:32:17.524400   26720 main.go:141] libmachine: (addons-952541) DBG |   <name>mk-addons-952541</name>
	I0127 10:32:17.524411   26720 main.go:141] libmachine: (addons-952541) DBG |   <dns enable='no'/>
	I0127 10:32:17.524419   26720 main.go:141] libmachine: (addons-952541) DBG |   
	I0127 10:32:17.524435   26720 main.go:141] libmachine: (addons-952541) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 10:32:17.524448   26720 main.go:141] libmachine: (addons-952541) DBG |     <dhcp>
	I0127 10:32:17.524457   26720 main.go:141] libmachine: (addons-952541) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 10:32:17.524467   26720 main.go:141] libmachine: (addons-952541) DBG |     </dhcp>
	I0127 10:32:17.524473   26720 main.go:141] libmachine: (addons-952541) DBG |   </ip>
	I0127 10:32:17.524482   26720 main.go:141] libmachine: (addons-952541) DBG |   
	I0127 10:32:17.524489   26720 main.go:141] libmachine: (addons-952541) DBG | </network>
	I0127 10:32:17.524503   26720 main.go:141] libmachine: (addons-952541) DBG | 
	I0127 10:32:17.530762   26720 main.go:141] libmachine: (addons-952541) DBG | trying to create private KVM network mk-addons-952541 192.168.39.0/24...
	I0127 10:32:17.593247   26720 main.go:141] libmachine: (addons-952541) DBG | private KVM network mk-addons-952541 192.168.39.0/24 created
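The private network the driver just created can be inspected or rebuilt with stock libvirt tooling; a hedged sketch, assuming the generated XML above has been saved to a file (path hypothetical):

    virsh --connect qemu:///system net-define /tmp/mk-addons-952541.xml
    virsh --connect qemu:///system net-start mk-addons-952541
    virsh --connect qemu:///system net-list --all   # should list mk-addons-952541 as active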
	I0127 10:32:17.593287   26720 main.go:141] libmachine: (addons-952541) setting up store path in /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541 ...
	I0127 10:32:17.593302   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:17.593202   26743 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:32:17.593315   26720 main.go:141] libmachine: (addons-952541) building disk image from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 10:32:17.593337   26720 main.go:141] libmachine: (addons-952541) Downloading /home/jenkins/minikube-integration/20319-18835/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 10:32:17.843178   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:17.843038   26743 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa...
	I0127 10:32:18.186557   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:18.186408   26743 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/addons-952541.rawdisk...
	I0127 10:32:18.186592   26720 main.go:141] libmachine: (addons-952541) DBG | Writing magic tar header
	I0127 10:32:18.186606   26720 main.go:141] libmachine: (addons-952541) DBG | Writing SSH key tar header
	I0127 10:32:18.186621   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:18.186516   26743 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541 ...
	I0127 10:32:18.186640   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541
	I0127 10:32:18.186649   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines
	I0127 10:32:18.186662   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:32:18.186670   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541 (perms=drwx------)
	I0127 10:32:18.186677   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835
	I0127 10:32:18.186690   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 10:32:18.186705   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines (perms=drwxr-xr-x)
	I0127 10:32:18.186717   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home/jenkins
	I0127 10:32:18.186728   26720 main.go:141] libmachine: (addons-952541) DBG | checking permissions on dir: /home
	I0127 10:32:18.186736   26720 main.go:141] libmachine: (addons-952541) DBG | skipping /home - not owner
	I0127 10:32:18.186746   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube (perms=drwxr-xr-x)
	I0127 10:32:18.186752   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins/minikube-integration/20319-18835 (perms=drwxrwxr-x)
	I0127 10:32:18.186766   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 10:32:18.186777   26720 main.go:141] libmachine: (addons-952541) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 10:32:18.186790   26720 main.go:141] libmachine: (addons-952541) creating domain...
	I0127 10:32:18.188060   26720 main.go:141] libmachine: (addons-952541) define libvirt domain using xml: 
	I0127 10:32:18.188095   26720 main.go:141] libmachine: (addons-952541) <domain type='kvm'>
	I0127 10:32:18.188106   26720 main.go:141] libmachine: (addons-952541)   <name>addons-952541</name>
	I0127 10:32:18.188113   26720 main.go:141] libmachine: (addons-952541)   <memory unit='MiB'>4000</memory>
	I0127 10:32:18.188122   26720 main.go:141] libmachine: (addons-952541)   <vcpu>2</vcpu>
	I0127 10:32:18.188129   26720 main.go:141] libmachine: (addons-952541)   <features>
	I0127 10:32:18.188137   26720 main.go:141] libmachine: (addons-952541)     <acpi/>
	I0127 10:32:18.188148   26720 main.go:141] libmachine: (addons-952541)     <apic/>
	I0127 10:32:18.188155   26720 main.go:141] libmachine: (addons-952541)     <pae/>
	I0127 10:32:18.188170   26720 main.go:141] libmachine: (addons-952541)     
	I0127 10:32:18.188182   26720 main.go:141] libmachine: (addons-952541)   </features>
	I0127 10:32:18.188197   26720 main.go:141] libmachine: (addons-952541)   <cpu mode='host-passthrough'>
	I0127 10:32:18.188211   26720 main.go:141] libmachine: (addons-952541)   
	I0127 10:32:18.188220   26720 main.go:141] libmachine: (addons-952541)   </cpu>
	I0127 10:32:18.188228   26720 main.go:141] libmachine: (addons-952541)   <os>
	I0127 10:32:18.188239   26720 main.go:141] libmachine: (addons-952541)     <type>hvm</type>
	I0127 10:32:18.188249   26720 main.go:141] libmachine: (addons-952541)     <boot dev='cdrom'/>
	I0127 10:32:18.188258   26720 main.go:141] libmachine: (addons-952541)     <boot dev='hd'/>
	I0127 10:32:18.188266   26720 main.go:141] libmachine: (addons-952541)     <bootmenu enable='no'/>
	I0127 10:32:18.188276   26720 main.go:141] libmachine: (addons-952541)   </os>
	I0127 10:32:18.188284   26720 main.go:141] libmachine: (addons-952541)   <devices>
	I0127 10:32:18.188300   26720 main.go:141] libmachine: (addons-952541)     <disk type='file' device='cdrom'>
	I0127 10:32:18.188312   26720 main.go:141] libmachine: (addons-952541)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/boot2docker.iso'/>
	I0127 10:32:18.188323   26720 main.go:141] libmachine: (addons-952541)       <target dev='hdc' bus='scsi'/>
	I0127 10:32:18.188331   26720 main.go:141] libmachine: (addons-952541)       <readonly/>
	I0127 10:32:18.188340   26720 main.go:141] libmachine: (addons-952541)     </disk>
	I0127 10:32:18.188349   26720 main.go:141] libmachine: (addons-952541)     <disk type='file' device='disk'>
	I0127 10:32:18.188361   26720 main.go:141] libmachine: (addons-952541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 10:32:18.188380   26720 main.go:141] libmachine: (addons-952541)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/addons-952541.rawdisk'/>
	I0127 10:32:18.188395   26720 main.go:141] libmachine: (addons-952541)       <target dev='hda' bus='virtio'/>
	I0127 10:32:18.188420   26720 main.go:141] libmachine: (addons-952541)     </disk>
	I0127 10:32:18.188440   26720 main.go:141] libmachine: (addons-952541)     <interface type='network'>
	I0127 10:32:18.188447   26720 main.go:141] libmachine: (addons-952541)       <source network='mk-addons-952541'/>
	I0127 10:32:18.188455   26720 main.go:141] libmachine: (addons-952541)       <model type='virtio'/>
	I0127 10:32:18.188460   26720 main.go:141] libmachine: (addons-952541)     </interface>
	I0127 10:32:18.188467   26720 main.go:141] libmachine: (addons-952541)     <interface type='network'>
	I0127 10:32:18.188473   26720 main.go:141] libmachine: (addons-952541)       <source network='default'/>
	I0127 10:32:18.188482   26720 main.go:141] libmachine: (addons-952541)       <model type='virtio'/>
	I0127 10:32:18.188509   26720 main.go:141] libmachine: (addons-952541)     </interface>
	I0127 10:32:18.188527   26720 main.go:141] libmachine: (addons-952541)     <serial type='pty'>
	I0127 10:32:18.188545   26720 main.go:141] libmachine: (addons-952541)       <target port='0'/>
	I0127 10:32:18.188561   26720 main.go:141] libmachine: (addons-952541)     </serial>
	I0127 10:32:18.188581   26720 main.go:141] libmachine: (addons-952541)     <console type='pty'>
	I0127 10:32:18.188592   26720 main.go:141] libmachine: (addons-952541)       <target type='serial' port='0'/>
	I0127 10:32:18.188603   26720 main.go:141] libmachine: (addons-952541)     </console>
	I0127 10:32:18.188612   26720 main.go:141] libmachine: (addons-952541)     <rng model='virtio'>
	I0127 10:32:18.188625   26720 main.go:141] libmachine: (addons-952541)       <backend model='random'>/dev/random</backend>
	I0127 10:32:18.188635   26720 main.go:141] libmachine: (addons-952541)     </rng>
	I0127 10:32:18.188646   26720 main.go:141] libmachine: (addons-952541)     
	I0127 10:32:18.188658   26720 main.go:141] libmachine: (addons-952541)     
	I0127 10:32:18.188668   26720 main.go:141] libmachine: (addons-952541)   </devices>
	I0127 10:32:18.188678   26720 main.go:141] libmachine: (addons-952541) </domain>
	I0127 10:32:18.188690   26720 main.go:141] libmachine: (addons-952541) 
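The domain XML printed above follows the ordinary virsh lifecycle; a minimal sketch, with the file path hypothetical:

    virsh --connect qemu:///system define /tmp/addons-952541-domain.xml
    virsh --connect qemu:///system start addons-952541
    virsh --connect qemu:///system dumpxml addons-952541   # echoes back the definition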
	I0127 10:32:18.194473   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:80:43:b4 in network default
	I0127 10:32:18.195001   26720 main.go:141] libmachine: (addons-952541) starting domain...
	I0127 10:32:18.195023   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:18.195032   26720 main.go:141] libmachine: (addons-952541) ensuring networks are active...
	I0127 10:32:18.195566   26720 main.go:141] libmachine: (addons-952541) Ensuring network default is active
	I0127 10:32:18.195823   26720 main.go:141] libmachine: (addons-952541) Ensuring network mk-addons-952541 is active
	I0127 10:32:18.196235   26720 main.go:141] libmachine: (addons-952541) getting domain XML...
	I0127 10:32:18.196798   26720 main.go:141] libmachine: (addons-952541) creating domain...
	I0127 10:32:19.571990   26720 main.go:141] libmachine: (addons-952541) waiting for IP...
	I0127 10:32:19.572803   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:19.573166   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:19.573235   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:19.573182   26743 retry.go:31] will retry after 269.352548ms: waiting for domain to come up
	I0127 10:32:19.844633   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:19.845030   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:19.845055   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:19.845014   26743 retry.go:31] will retry after 330.692073ms: waiting for domain to come up
	I0127 10:32:20.177483   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:20.177845   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:20.177872   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:20.177832   26743 retry.go:31] will retry after 430.763139ms: waiting for domain to come up
	I0127 10:32:20.610437   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:20.610786   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:20.610832   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:20.610792   26743 retry.go:31] will retry after 489.121618ms: waiting for domain to come up
	I0127 10:32:21.101393   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:21.101782   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:21.101830   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:21.101768   26743 retry.go:31] will retry after 669.324461ms: waiting for domain to come up
	I0127 10:32:21.772923   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:21.773514   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:21.773572   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:21.773516   26743 retry.go:31] will retry after 872.796095ms: waiting for domain to come up
	I0127 10:32:22.648381   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:22.648838   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:22.648875   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:22.648819   26743 retry.go:31] will retry after 957.797872ms: waiting for domain to come up
	I0127 10:32:23.607865   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:23.608278   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:23.608305   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:23.608227   26743 retry.go:31] will retry after 1.059445573s: waiting for domain to come up
	I0127 10:32:24.669406   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:24.669828   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:24.669851   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:24.669795   26743 retry.go:31] will retry after 1.704120851s: waiting for domain to come up
	I0127 10:32:26.376403   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:26.376846   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:26.376869   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:26.376820   26743 retry.go:31] will retry after 1.641316801s: waiting for domain to come up
	I0127 10:32:28.020343   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:28.020841   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:28.020877   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:28.020804   26743 retry.go:31] will retry after 2.057247936s: waiting for domain to come up
	I0127 10:32:30.080838   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:30.081186   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:30.081214   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:30.081154   26743 retry.go:31] will retry after 3.01266785s: waiting for domain to come up
	I0127 10:32:33.095847   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:33.096270   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:33.096295   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:33.096230   26743 retry.go:31] will retry after 3.267852037s: waiting for domain to come up
	I0127 10:32:36.365662   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:36.365923   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find current IP address of domain addons-952541 in network mk-addons-952541
	I0127 10:32:36.365958   26720 main.go:141] libmachine: (addons-952541) DBG | I0127 10:32:36.365892   26743 retry.go:31] will retry after 4.076143252s: waiting for domain to come up
	I0127 10:32:40.445461   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.445861   26720 main.go:141] libmachine: (addons-952541) found domain IP: 192.168.39.92
	I0127 10:32:40.445885   26720 main.go:141] libmachine: (addons-952541) reserving static IP address...
	I0127 10:32:40.445898   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has current primary IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.446292   26720 main.go:141] libmachine: (addons-952541) DBG | unable to find host DHCP lease matching {name: "addons-952541", mac: "52:54:00:e7:39:b2", ip: "192.168.39.92"} in network mk-addons-952541
	I0127 10:32:40.514951   26720 main.go:141] libmachine: (addons-952541) reserved static IP address 192.168.39.92 for domain addons-952541
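The lease the driver was polling for is also visible from the host side; a hedged sketch using the network name from the log:

    virsh --connect qemu:///system net-dhcp-leases mk-addons-952541
    # expect a row with MAC 52:54:00:e7:39:b2 and IP 192.168.39.92/24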
	I0127 10:32:40.514977   26720 main.go:141] libmachine: (addons-952541) waiting for SSH...
	I0127 10:32:40.514986   26720 main.go:141] libmachine: (addons-952541) DBG | Getting to WaitForSSH function...
	I0127 10:32:40.517398   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.517747   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:40.517774   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.517914   26720 main.go:141] libmachine: (addons-952541) DBG | Using SSH client type: external
	I0127 10:32:40.517939   26720 main.go:141] libmachine: (addons-952541) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa (-rw-------)
	I0127 10:32:40.517970   26720 main.go:141] libmachine: (addons-952541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 10:32:40.517981   26720 main.go:141] libmachine: (addons-952541) DBG | About to run SSH command:
	I0127 10:32:40.517990   26720 main.go:141] libmachine: (addons-952541) DBG | exit 0
	I0127 10:32:40.647464   26720 main.go:141] libmachine: (addons-952541) DBG | SSH cmd err, output: <nil>: 
	I0127 10:32:40.647750   26720 main.go:141] libmachine: (addons-952541) KVM machine creation complete
	I0127 10:32:40.648057   26720 main.go:141] libmachine: (addons-952541) Calling .GetConfigRaw
	I0127 10:32:40.648607   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:40.648774   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:40.648949   26720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 10:32:40.648965   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:32:40.650124   26720 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 10:32:40.650138   26720 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 10:32:40.650144   26720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 10:32:40.650150   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:40.652459   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.652805   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:40.652829   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.652978   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:40.653166   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.653342   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.653532   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:40.653726   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:40.653938   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:40.653950   26720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 10:32:40.762485   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 10:32:40.762506   26720 main.go:141] libmachine: Detecting the provisioner...
	I0127 10:32:40.762512   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:40.765232   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.765516   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:40.765542   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.765680   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:40.765856   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.766016   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.766132   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:40.766274   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:40.766426   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:40.766436   26720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 10:32:40.875937   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 10:32:40.876002   26720 main.go:141] libmachine: found compatible host: buildroot
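
The "found compatible host: buildroot" decision comes from the `cat /etc/os-release` output above. A rough sketch of picking the provisioner from the standard os-release `ID` field (function name illustrative):

	// detect_provisioner.go - read /etc/os-release and return the ID field,
	// which is "buildroot" on the minikube ISO shown in the log.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
				return strings.Trim(v, `"`), nil // values may be quoted
			}
		}
		return "", fmt.Errorf("no ID field in %s", path)
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("detected host flavor:", id)
	}
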
	I0127 10:32:40.876010   26720 main.go:141] libmachine: Provisioning with buildroot...
	I0127 10:32:40.876016   26720 main.go:141] libmachine: (addons-952541) Calling .GetMachineName
	I0127 10:32:40.876274   26720 buildroot.go:166] provisioning hostname "addons-952541"
	I0127 10:32:40.876292   26720 main.go:141] libmachine: (addons-952541) Calling .GetMachineName
	I0127 10:32:40.876489   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:40.879008   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.879357   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:40.879386   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:40.879557   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:40.879750   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.879916   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:40.880079   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:40.880206   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:40.880364   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:40.880378   26720 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-952541 && echo "addons-952541" | sudo tee /etc/hostname
	I0127 10:32:41.004308   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952541
	
	I0127 10:32:41.004345   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.006915   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.007265   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.007291   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.007571   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.007822   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.007990   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.008094   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.008232   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:41.008390   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:41.008406   26720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-952541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-952541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-952541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 10:32:41.127347   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 10:32:41.127371   26720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 10:32:41.127389   26720 buildroot.go:174] setting up certificates
	I0127 10:32:41.127399   26720 provision.go:84] configureAuth start
	I0127 10:32:41.127407   26720 main.go:141] libmachine: (addons-952541) Calling .GetMachineName
	I0127 10:32:41.127675   26720 main.go:141] libmachine: (addons-952541) Calling .GetIP
	I0127 10:32:41.130492   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.130790   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.130829   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.130969   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.133218   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.133486   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.133516   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.133607   26720 provision.go:143] copyHostCerts
	I0127 10:32:41.133694   26720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 10:32:41.133814   26720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 10:32:41.133894   26720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 10:32:41.133957   26720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.addons-952541 san=[127.0.0.1 192.168.39.92 addons-952541 localhost minikube]
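
The server cert generated above is signed by the profile's CA and carries the listed SANs so the machine is reachable as any of those names or IPs. A condensed sketch of that step with crypto/x509; for brevity the CA is created in-memory (minikube loads ca.pem/ca-key.pem from disk instead) and error handling is elided:

	// server_cert.go - sign a server certificate for the machine's SANs
	// (the DNS names and IPs from the "generating server cert" log line).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-952541"}},
			// The SANs from the log line above.
			DNSNames:    []string{"addons-952541", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(10, 0, 0),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
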
	I0127 10:32:41.250290   26720 provision.go:177] copyRemoteCerts
	I0127 10:32:41.250340   26720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 10:32:41.250360   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.252999   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.253386   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.253412   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.253613   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.253781   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.253915   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.254040   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:32:41.337234   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 10:32:41.359469   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 10:32:41.381427   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 10:32:41.402144   26720 provision.go:87] duration metric: took 274.734982ms to configureAuth
	I0127 10:32:41.402168   26720 buildroot.go:189] setting minikube options for container-runtime
	I0127 10:32:41.402325   26720 config.go:182] Loaded profile config "addons-952541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:32:41.402392   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.404775   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.405104   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.405132   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.405341   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.405527   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.405714   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.405843   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.406006   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:41.406149   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:41.406162   26720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 10:32:41.623710   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 10:32:41.623745   26720 main.go:141] libmachine: Checking connection to Docker...
	I0127 10:32:41.623754   26720 main.go:141] libmachine: (addons-952541) Calling .GetURL
	I0127 10:32:41.624952   26720 main.go:141] libmachine: (addons-952541) DBG | using libvirt version 6000000
	I0127 10:32:41.627129   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.627533   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.627555   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.627787   26720 main.go:141] libmachine: Docker is up and running!
	I0127 10:32:41.627804   26720 main.go:141] libmachine: Reticulating splines...
	I0127 10:32:41.627811   26720 client.go:171] duration metric: took 24.361944794s to LocalClient.Create
	I0127 10:32:41.627837   26720 start.go:167] duration metric: took 24.362013151s to libmachine.API.Create "addons-952541"
	I0127 10:32:41.627856   26720 start.go:293] postStartSetup for "addons-952541" (driver="kvm2")
	I0127 10:32:41.627870   26720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 10:32:41.627891   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:41.628111   26720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 10:32:41.628141   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.630514   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.630846   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.630875   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.631016   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.631173   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.631385   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.631547   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:32:41.717188   26720 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 10:32:41.720808   26720 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 10:32:41.720826   26720 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 10:32:41.720885   26720 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 10:32:41.720907   26720 start.go:296] duration metric: took 93.042143ms for postStartSetup
	I0127 10:32:41.720936   26720 main.go:141] libmachine: (addons-952541) Calling .GetConfigRaw
	I0127 10:32:41.721478   26720 main.go:141] libmachine: (addons-952541) Calling .GetIP
	I0127 10:32:41.723995   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.724338   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.724363   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.724660   26720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/config.json ...
	I0127 10:32:41.724818   26720 start.go:128] duration metric: took 24.477323942s to createHost
	I0127 10:32:41.724839   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.727162   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.727543   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.727576   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.727707   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.727886   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.728008   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.728147   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.728287   26720 main.go:141] libmachine: Using SSH client type: native
	I0127 10:32:41.728479   26720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0127 10:32:41.728495   26720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 10:32:41.839627   26720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737973961.815032056
	
	I0127 10:32:41.839646   26720 fix.go:216] guest clock: 1737973961.815032056
	I0127 10:32:41.839653   26720 fix.go:229] Guest: 2025-01-27 10:32:41.815032056 +0000 UTC Remote: 2025-01-27 10:32:41.724828405 +0000 UTC m=+24.576060792 (delta=90.203651ms)
	I0127 10:32:41.839669   26720 fix.go:200] guest clock delta is within tolerance: 90.203651ms
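
The fix.go lines above run `date +%s.%N` in the guest and compare the result against the host clock, logging the 90ms delta. A small sketch of that check, assuming the standard library; the sample value is taken from the log and the 2s tolerance is a hypothetical threshold, not necessarily minikube's:

	// clock_delta.go - parse a guest `date +%s.%N` reading and report how far
	// it drifts from the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 { // %N always prints nine digits, i.e. nanoseconds
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1737973961.815032056\n")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // hypothetical threshold
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < tolerance)
	}
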
	I0127 10:32:41.839673   26720 start.go:83] releasing machines lock for "addons-952541", held for 24.592245838s
	I0127 10:32:41.839730   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:41.839974   26720 main.go:141] libmachine: (addons-952541) Calling .GetIP
	I0127 10:32:41.842452   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.842838   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.842862   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.843059   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:41.843527   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:41.843723   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:32:41.843815   26720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 10:32:41.843859   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.843912   26720 ssh_runner.go:195] Run: cat /version.json
	I0127 10:32:41.843930   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:32:41.846476   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.846644   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.846767   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.846791   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.847038   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.847050   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:41.847063   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:41.847233   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:32:41.847237   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.847420   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:32:41.847451   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.847567   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:32:41.847566   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:32:41.847710   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:32:41.950291   26720 ssh_runner.go:195] Run: systemctl --version
	I0127 10:32:41.955803   26720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 10:32:42.113287   26720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 10:32:42.118730   26720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 10:32:42.118785   26720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 10:32:42.132944   26720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 10:32:42.132965   26720 start.go:495] detecting cgroup driver to use...
	I0127 10:32:42.133021   26720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 10:32:42.148532   26720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 10:32:42.161885   26720 docker.go:217] disabling cri-docker service (if available) ...
	I0127 10:32:42.161947   26720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 10:32:42.175192   26720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 10:32:42.188708   26720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 10:32:42.294731   26720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 10:32:42.441368   26720 docker.go:233] disabling docker service ...
	I0127 10:32:42.441452   26720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 10:32:42.454915   26720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 10:32:42.467042   26720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 10:32:42.577789   26720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 10:32:42.684584   26720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 10:32:42.697540   26720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 10:32:42.714191   26720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 10:32:42.714241   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.723458   26720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 10:32:42.723513   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.733816   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.743653   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.753990   26720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 10:32:42.763628   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.772684   26720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 10:32:42.788393   26720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
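
The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of two of those rewrites done with Go regexps on an inlined sample (not the real file), equivalent in effect to the sed one-liners:

	// crio_conf.go - rewrite cri-o config keys the way the sed commands do.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}
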
	I0127 10:32:42.797577   26720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 10:32:42.805751   26720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 10:32:42.805788   26720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 10:32:42.818341   26720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
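
The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, hence the "might be okay" wording, the modprobe fallback, and the ip_forward write. A sketch of that fallback sequence (needs root; runs the same commands the log shows):

	// netfilter_check.go - load br_netfilter if its sysctl is missing, then
	// enable IPv4 forwarding, mirroring the three log steps above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v: %s\n", err, out)
				return
			}
		}
		// echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Println("enabling ip_forward:", err)
		}
	}
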
	I0127 10:32:42.827495   26720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 10:32:42.936838   26720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 10:32:43.033024   26720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 10:32:43.033101   26720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 10:32:43.038235   26720 start.go:563] Will wait 60s for crictl version
	I0127 10:32:43.038293   26720 ssh_runner.go:195] Run: which crictl
	I0127 10:32:43.042079   26720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 10:32:43.086892   26720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 10:32:43.086994   26720 ssh_runner.go:195] Run: crio --version
	I0127 10:32:43.115707   26720 ssh_runner.go:195] Run: crio --version
	I0127 10:32:43.278918   26720 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 10:32:43.340331   26720 main.go:141] libmachine: (addons-952541) Calling .GetIP
	I0127 10:32:43.343627   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:43.344043   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:32:43.344065   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:32:43.344349   26720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 10:32:43.348525   26720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
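
The bash pipeline above strips any stale host.minikube.internal line from /etc/hosts and appends the gateway entry via a temp file. A simplified Go equivalent (blank-line handling simplified; the final sudo cp over /etc/hosts is left to the caller):

	// hosts_entry.go - rebuild /etc/hosts with a fresh host.minikube.internal
	// entry, writing to a temp file the way the shell pipeline does.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// grep -v $'\thost.minikube.internal$'
			if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		tmp := "/tmp/hosts.new"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("wrote", tmp, "- install with: sudo cp", tmp, "/etc/hosts")
	}
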
	I0127 10:32:43.360544   26720 kubeadm.go:883] updating cluster {Name:addons-952541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-952541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 10:32:43.360640   26720 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 10:32:43.360676   26720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 10:32:43.399733   26720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 10:32:43.399789   26720 ssh_runner.go:195] Run: which lz4
	I0127 10:32:43.403680   26720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 10:32:43.407654   26720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 10:32:43.407683   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 10:32:44.555579   26720 crio.go:462] duration metric: took 1.151926166s to copy over tarball
	I0127 10:32:44.555656   26720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 10:32:46.631427   26720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.075738508s)
	I0127 10:32:46.631464   26720 crio.go:469] duration metric: took 2.075858034s to extract the tarball
	I0127 10:32:46.631474   26720 ssh_runner.go:146] rm: /preloaded.tar.lz4
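
The "duration metric: took ... to ..." lines above come from timing each provisioning step; a minimal version of that pattern, with a placeholder command standing in for the real lz4 tarball extraction (the helper name is illustrative):

	// duration_metric.go - wrap a step in a timer and log its duration in the
	// same style as the report.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func timed(label string, f func() error) error {
		start := time.Now()
		err := f()
		fmt.Printf("duration metric: took %s to %s\n", time.Since(start), label)
		return err
	}

	func main() {
		_ = timed("extract the tarball", func() error {
			// Placeholder for the real step:
			// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
			return exec.Command("true").Run()
		})
	}
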
	I0127 10:32:46.667726   26720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 10:32:46.705098   26720 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 10:32:46.705118   26720 cache_images.go:84] Images are preloaded, skipping loading
	I0127 10:32:46.705125   26720 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.32.1 crio true true} ...
	I0127 10:32:46.705244   26720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-952541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-952541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 10:32:46.705312   26720 ssh_runner.go:195] Run: crio config
	I0127 10:32:46.749790   26720 cni.go:84] Creating CNI manager for ""
	I0127 10:32:46.749813   26720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 10:32:46.749823   26720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 10:32:46.749843   26720 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-952541 NodeName:addons-952541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 10:32:46.749986   26720 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-952541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
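	
	The kubeadm config above (kubeadm.go:195) is rendered from the option struct two lines earlier. A toy version of that rendering step with text/template; the template fragment and struct fields here are illustrative, not minikube's actual types:
	
		// kubeadm_template.go - fill a kubeadm InitConfiguration fragment from
		// values like those in the log.
		package main
	
		import (
			"os"
			"text/template"
		)
	
		const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
		kind: InitConfiguration
		localAPIEndpoint:
		  advertiseAddress: {{.AdvertiseAddress}}
		  bindPort: {{.BindPort}}
		nodeRegistration:
		  criSocket: {{.CRISocket}}
		  name: "{{.NodeName}}"
		`
	
		func main() {
			t := template.Must(template.New("kubeadm").Parse(tmpl))
			_ = t.Execute(os.Stdout, struct {
				AdvertiseAddress, CRISocket, NodeName string
				BindPort                              int
			}{"192.168.39.92", "unix:///var/run/crio/crio.sock", "addons-952541", 8443})
		}
	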
	
	I0127 10:32:46.750050   26720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 10:32:46.759167   26720 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 10:32:46.759233   26720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 10:32:46.767769   26720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 10:32:46.782586   26720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 10:32:46.798120   26720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 10:32:46.813165   26720 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0127 10:32:46.816603   26720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 10:32:46.827542   26720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 10:32:46.945096   26720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 10:32:46.961434   26720 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541 for IP: 192.168.39.92
	I0127 10:32:46.961455   26720 certs.go:194] generating shared ca certs ...
	I0127 10:32:46.961477   26720 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:46.961630   26720 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 10:32:47.333685   26720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt ...
	I0127 10:32:47.333713   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt: {Name:mk363adfcc4f3b8178e34a02d0f202c4932c519c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.333913   26720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key ...
	I0127 10:32:47.333929   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key: {Name:mkc28766c587de84462eb62ed2b65b172a0ded14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.334023   26720 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 10:32:47.717883   26720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt ...
	I0127 10:32:47.717913   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt: {Name:mk8d8ed80a4224655adebae52082b9bf02cd83e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.718091   26720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key ...
	I0127 10:32:47.718110   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key: {Name:mk7f1b9262236d2e1f1bd3c4c0b516c6d690eb47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.718198   26720 certs.go:256] generating profile certs ...
	I0127 10:32:47.718274   26720 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.key
	I0127 10:32:47.718291   26720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt with IP's: []
	I0127 10:32:47.855311   26720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt ...
	I0127 10:32:47.855340   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: {Name:mk62d98cab209d7f8045604daeab9972eba24002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.855519   26720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.key ...
	I0127 10:32:47.855539   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.key: {Name:mk12fb538efb3c8a5d2246d4f5f9a3d0ed427db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:47.855662   26720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key.562d83ef
	I0127 10:32:47.855687   26720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt.562d83ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92]
	I0127 10:32:48.008458   26720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt.562d83ef ...
	I0127 10:32:48.008484   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt.562d83ef: {Name:mk4215f5f3b934cd60648d1262c662e5a52fc13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:48.008670   26720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key.562d83ef ...
	I0127 10:32:48.008690   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key.562d83ef: {Name:mkbf52377847c6cb7cf1654b09e25cefc35acab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:48.008784   26720 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt.562d83ef -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt
	I0127 10:32:48.008875   26720 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key.562d83ef -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key
	I0127 10:32:48.008941   26720 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.key
	I0127 10:32:48.008966   26720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.crt with IP's: []
	I0127 10:32:48.134677   26720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.crt ...
	I0127 10:32:48.134705   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.crt: {Name:mka613a6e7ca1675312e9461a262265e6c9a35b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:48.134881   26720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.key ...
	I0127 10:32:48.134898   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.key: {Name:mkccff14a7eb1a0a46d4cd6c9ee2cb0bf98427f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:32:48.135101   26720 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 10:32:48.135141   26720 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 10:32:48.135177   26720 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 10:32:48.135209   26720 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 10:32:48.135849   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 10:32:48.162320   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 10:32:48.184404   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 10:32:48.206273   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 10:32:48.227730   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 10:32:48.250294   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 10:32:48.271724   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 10:32:48.292449   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 10:32:48.314661   26720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 10:32:48.340895   26720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 10:32:48.359067   26720 ssh_runner.go:195] Run: openssl version
	I0127 10:32:48.364971   26720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 10:32:48.376016   26720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 10:32:48.380528   26720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 10:32:48.380580   26720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 10:32:48.386388   26720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
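
The `b5213941.0` name above is OpenSSL's subject hash of minikubeCA; linking `<hash>.0` into /etc/ssl/certs is what makes OpenSSL-based clients trust the CA. A sketch of that step, invoking the same `openssl x509 -hash` the log runs (paths taken from the log; needs root for the symlink):

	// cert_hash_link.go - compute a CA's subject hash and create the
	// /etc/ssl/certs/<hash>.0 trust link if it is missing.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		caPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// test -L <link> || ln -fs <ca> <link>
		if _, err := os.Lstat(link); err != nil {
			if err := os.Symlink(caPath, link); err != nil {
				fmt.Println("symlink:", err)
				return
			}
		}
		fmt.Println("trust link:", link)
	}
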
	I0127 10:32:48.397197   26720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 10:32:48.401491   26720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 10:32:48.401534   26720 kubeadm.go:392] StartCluster: {Name:addons-952541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-952541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:32:48.401602   26720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 10:32:48.401638   26720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 10:32:48.439839   26720 cri.go:89] found id: ""
	I0127 10:32:48.439914   26720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 10:32:48.450320   26720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 10:32:48.460532   26720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 10:32:48.470678   26720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 10:32:48.470723   26720 kubeadm.go:157] found existing configuration files:
	
	I0127 10:32:48.470772   26720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 10:32:48.480486   26720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 10:32:48.480533   26720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 10:32:48.489844   26720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 10:32:48.498096   26720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 10:32:48.498154   26720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 10:32:48.506473   26720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 10:32:48.520725   26720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 10:32:48.520769   26720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 10:32:48.536074   26720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 10:32:48.546058   26720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 10:32:48.546114   26720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
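
The four grep/rm pairs above implement stale-config cleanup: each kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so `kubeadm init` can write a fresh one. A compact sketch of that loop (endpoint and paths from the log):

	// stale_config.go - remove kubeconfigs that do not reference the expected
	// control-plane endpoint, mirroring the grep/rm sequence above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				os.Remove(conf) // sudo rm -f <conf>
				fmt.Printf("%q may not reference %s - removed\n", conf, endpoint)
			}
		}
	}
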
	I0127 10:32:48.562825   26720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 10:32:48.613503   26720 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 10:32:48.613625   26720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 10:32:48.704685   26720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 10:32:48.704807   26720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 10:32:48.704952   26720 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
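kubeadm runs these preflight checks before generating anything; the init command above skips a known set via --ignore-preflight-errors. If a check needs to be reproduced by hand on the node, the phase can be run standalone (illustrative, reusing the config file the log shows):

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml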
	I0127 10:32:48.718274   26720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 10:32:48.995689   26720 out.go:235]   - Generating certificates and keys ...
	I0127 10:32:48.995812   26720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 10:32:48.995898   26720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 10:32:48.995981   26720 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 10:32:48.996083   26720 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 10:32:49.142839   26720 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 10:32:49.438023   26720 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 10:32:49.606198   26720 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 10:32:49.606336   26720 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-952541 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0127 10:32:49.784005   26720 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 10:32:49.784145   26720 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-952541 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0127 10:32:49.874044   26720 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 10:32:49.944784   26720 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 10:32:50.174871   26720 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 10:32:50.174943   26720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 10:32:50.270533   26720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 10:32:50.737836   26720 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 10:32:50.982761   26720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 10:32:51.162910   26720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 10:32:51.303649   26720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 10:32:51.304144   26720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 10:32:51.306407   26720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 10:32:51.308318   26720 out.go:235]   - Booting up control plane ...
	I0127 10:32:51.308437   26720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 10:32:51.308559   26720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 10:32:51.310197   26720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 10:32:51.326446   26720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 10:32:51.333605   26720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 10:32:51.333682   26720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 10:32:51.454629   26720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 10:32:51.454796   26720 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 10:32:52.454815   26720 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001242813s
	I0127 10:32:52.454900   26720 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 10:32:56.454879   26720 kubeadm.go:310] [api-check] The API server is healthy after 4.000983431s
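Both health gates above poll local endpoints and can be probed manually from inside the node (a sketch; port 8443 is the apiserver port used throughout this profile):

    curl -s http://127.0.0.1:10248/healthz    # kubelet health, as polled by kubeadm
    curl -sk https://127.0.0.1:8443/livez     # apiserver liveness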
	I0127 10:32:56.464795   26720 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 10:32:56.480229   26720 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 10:32:56.510531   26720 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 10:32:56.510784   26720 kubeadm.go:310] [mark-control-plane] Marking the node addons-952541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 10:32:56.520854   26720 kubeadm.go:310] [bootstrap-token] Using token: jy7rfd.bq482nfi9l322q7n
	I0127 10:32:56.522352   26720 out.go:235]   - Configuring RBAC rules ...
	I0127 10:32:56.522462   26720 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 10:32:56.528791   26720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 10:32:56.534875   26720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 10:32:56.538052   26720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 10:32:56.541604   26720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 10:32:56.544018   26720 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 10:32:56.860117   26720 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 10:32:57.286129   26720 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 10:32:57.858150   26720 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 10:32:57.859047   26720 kubeadm.go:310] 
	I0127 10:32:57.859128   26720 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 10:32:57.859159   26720 kubeadm.go:310] 
	I0127 10:32:57.859288   26720 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 10:32:57.859317   26720 kubeadm.go:310] 
	I0127 10:32:57.859365   26720 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 10:32:57.859448   26720 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 10:32:57.859518   26720 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 10:32:57.859527   26720 kubeadm.go:310] 
	I0127 10:32:57.859639   26720 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 10:32:57.859656   26720 kubeadm.go:310] 
	I0127 10:32:57.859726   26720 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 10:32:57.859736   26720 kubeadm.go:310] 
	I0127 10:32:57.859809   26720 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 10:32:57.859909   26720 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 10:32:57.859999   26720 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 10:32:57.860009   26720 kubeadm.go:310] 
	I0127 10:32:57.860114   26720 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 10:32:57.860212   26720 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 10:32:57.860225   26720 kubeadm.go:310] 
	I0127 10:32:57.860343   26720 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jy7rfd.bq482nfi9l322q7n \
	I0127 10:32:57.860482   26720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 10:32:57.860514   26720 kubeadm.go:310] 	--control-plane 
	I0127 10:32:57.860524   26720 kubeadm.go:310] 
	I0127 10:32:57.860667   26720 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 10:32:57.860682   26720 kubeadm.go:310] 
	I0127 10:32:57.860785   26720 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jy7rfd.bq482nfi9l322q7n \
	I0127 10:32:57.860924   26720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 10:32:57.861473   26720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
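The --discovery-token-ca-cert-hash printed in the join commands pins the cluster CA. It can be recomputed with the standard openssl pipeline from the kubeadm documentation, assuming the CA certificate sits under the certificateDir noted earlier (/var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'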
	I0127 10:32:57.861507   26720 cni.go:84] Creating CNI manager for ""
	I0127 10:32:57.861524   26720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 10:32:57.863381   26720 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 10:32:57.864844   26720 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 10:32:57.876561   26720 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
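The 496-byte conflist copied above configures the CNI bridge plugin that the "kvm2 + crio" combination recommends. Its exact contents are not shown in the log; a representative bridge conflist looks like this (field values are illustrative assumptions, not the actual file):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF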
	I0127 10:32:57.892772   26720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 10:32:57.892832   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:32:57.892853   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-952541 minikube.k8s.io/updated_at=2025_01_27T10_32_57_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=addons-952541 minikube.k8s.io/primary=true
	I0127 10:32:57.909498   26720 ops.go:34] apiserver oom_adj: -16
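An oom_adj of -16 tells the kernel OOM killer to strongly prefer sacrificing other processes over the apiserver. The value can be inspected the same way the runner does, along with the modern equivalent knob:

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj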
	I0127 10:32:58.018644   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:32:58.519505   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:32:59.019479   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:32:59.519122   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:33:00.019703   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:33:00.519648   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:33:01.019188   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:33:01.519313   26720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 10:33:01.594666   26720 kubeadm.go:1113] duration metric: took 3.70186752s to wait for elevateKubeSystemPrivileges
	I0127 10:33:01.594705   26720 kubeadm.go:394] duration metric: took 13.193174768s to StartCluster
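The eight "get sa default" calls above are a poll: minikube waits, at roughly 500ms intervals, for the default ServiceAccount to exist before binding cluster-admin to kube-system, which is the work that elevateKubeSystemPrivileges times. An equivalent shell sketch:

    until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done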
	I0127 10:33:01.594729   26720 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:33:01.594865   26720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:33:01.595250   26720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:33:01.595467   26720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 10:33:01.595472   26720 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 10:33:01.595548   26720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
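The toEnable map is the effective addon selection for this profile (the test job's defaults plus anything requested explicitly); each addon set to true gets its own enable goroutine, which is why the following log lines interleave so heavily. The resulting state can be listed after startup with:

    minikube addons list -p addons-952541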
	I0127 10:33:01.595679   26720 addons.go:69] Setting default-storageclass=true in profile "addons-952541"
	I0127 10:33:01.595698   26720 addons.go:69] Setting yakd=true in profile "addons-952541"
	I0127 10:33:01.595693   26720 addons.go:69] Setting cloud-spanner=true in profile "addons-952541"
	I0127 10:33:01.595704   26720 addons.go:69] Setting ingress-dns=true in profile "addons-952541"
	I0127 10:33:01.595719   26720 config.go:182] Loaded profile config "addons-952541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:33:01.595729   26720 addons.go:238] Setting addon yakd=true in "addons-952541"
	I0127 10:33:01.595734   26720 addons.go:238] Setting addon ingress-dns=true in "addons-952541"
	I0127 10:33:01.595735   26720 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-952541"
	I0127 10:33:01.595740   26720 addons.go:69] Setting inspektor-gadget=true in profile "addons-952541"
	I0127 10:33:01.595748   26720 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-952541"
	I0127 10:33:01.595754   26720 addons.go:238] Setting addon inspektor-gadget=true in "addons-952541"
	I0127 10:33:01.595754   26720 addons.go:69] Setting metrics-server=true in profile "addons-952541"
	I0127 10:33:01.595761   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595723   26720 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-952541"
	I0127 10:33:01.595772   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595768   26720 addons.go:238] Setting addon metrics-server=true in "addons-952541"
	I0127 10:33:01.595774   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595778   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595808   26720 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-952541"
	I0127 10:33:01.595812   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595837   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.595741   26720 addons.go:238] Setting addon cloud-spanner=true in "addons-952541"
	I0127 10:33:01.595876   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.596130   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596166   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.595723   26720 addons.go:69] Setting registry=true in profile "addons-952541"
	I0127 10:33:01.596206   26720 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-952541"
	I0127 10:33:01.596210   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596219   26720 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-952541"
	I0127 10:33:01.596221   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596230   26720 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-952541"
	I0127 10:33:01.596244   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596252   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596257   26720 addons.go:69] Setting storage-provisioner=true in profile "addons-952541"
	I0127 10:33:01.596219   26720 addons.go:238] Setting addon registry=true in "addons-952541"
	I0127 10:33:01.596269   26720 addons.go:238] Setting addon storage-provisioner=true in "addons-952541"
	I0127 10:33:01.596283   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596360   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596248   26720 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-952541"
	I0127 10:33:01.596499   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.596613   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596650   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596267   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596805   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596859   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596863   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596932   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.596290   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.596299   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.596299   26720 addons.go:69] Setting ingress=true in profile "addons-952541"
	I0127 10:33:01.597399   26720 addons.go:238] Setting addon ingress=true in "addons-952541"
	I0127 10:33:01.597406   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.597433   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.597434   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.595712   26720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-952541"
	I0127 10:33:01.596310   26720 addons.go:69] Setting volcano=true in profile "addons-952541"
	I0127 10:33:01.597536   26720 addons.go:238] Setting addon volcano=true in "addons-952541"
	I0127 10:33:01.597565   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.596319   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.597629   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596319   26720 addons.go:69] Setting gcp-auth=true in profile "addons-952541"
	I0127 10:33:01.597699   26720 mustload.go:65] Loading cluster: addons-952541
	I0127 10:33:01.597799   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.597830   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.597857   26720 config.go:182] Loaded profile config "addons-952541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:33:01.597866   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.597892   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.597904   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.597928   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596320   26720 addons.go:69] Setting volumesnapshots=true in profile "addons-952541"
	I0127 10:33:01.598108   26720 addons.go:238] Setting addon volumesnapshots=true in "addons-952541"
	I0127 10:33:01.598138   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.598537   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.598574   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.596994   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.597382   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.598935   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.600546   26720 out.go:177] * Verifying Kubernetes components...
	I0127 10:33:01.602181   26720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
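Each concurrent addon setup launches its own kvm2 machine-driver plugin process, so every "Plugin server listening at address 127.0.0.1:<port>" line below is a separate local RPC endpoint that minikube's libmachine client then drives (.GetVersion, .GetState, and so on). On the host they would show up as listeners (illustrative):

    sudo ss -ltnp | grep docker-machine-driver-kvm2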
	I0127 10:33:01.617506   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0127 10:33:01.617558   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I0127 10:33:01.617754   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I0127 10:33:01.618050   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.618168   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.618229   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.618316   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0127 10:33:01.618414   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0127 10:33:01.618644   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.618688   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.618823   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.618841   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.618901   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.619103   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.619192   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.619386   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.619401   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.619532   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.619546   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.620215   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.620409   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.627469   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0127 10:33:01.628058   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.628103   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.628376   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.628417   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.628876   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.628901   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.630504   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0127 10:33:01.630604   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I0127 10:33:01.630661   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0127 10:33:01.628064   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.630824   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.635697   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.635767   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.635828   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.635897   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.635894   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.628064   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.635967   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.636721   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.636740   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.636874   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.636886   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.637011   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.637021   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.637144   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.637164   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.637281   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.637296   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.637344   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.637392   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.637411   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.637489   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.637928   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.637962   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.638195   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.638614   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.638650   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.638733   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.643417   26720 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-952541"
	I0127 10:33:01.643462   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.643852   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.643885   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.652174   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0127 10:33:01.652848   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0127 10:33:01.652970   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.653259   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.653711   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.653727   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.653912   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.653924   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.654014   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.654550   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.654590   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.654802   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.654810   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I0127 10:33:01.655156   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.655772   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.655788   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.656145   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.656699   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.656738   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.661936   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0127 10:33:01.662346   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.662858   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.662874   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.663169   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.663291   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.664235   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.664288   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.666312   26720 addons.go:238] Setting addon default-storageclass=true in "addons-952541"
	I0127 10:33:01.666352   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.666704   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.666739   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.666937   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
	I0127 10:33:01.667308   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.667742   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.667760   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.668084   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.668246   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.669574   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.669623   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.670359   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.670402   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.674174   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:01.674576   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.674614   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.675117   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0127 10:33:01.675518   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.676158   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.676177   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.676526   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.677004   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.677044   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.681853   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0127 10:33:01.682154   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
	I0127 10:33:01.682386   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.682957   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.682984   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.683334   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.683630   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.683709   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.683796   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0127 10:33:01.684101   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.684209   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.684254   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.684708   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.684777   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.684798   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.684881   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.685165   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.685413   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.687050   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.687105   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.687557   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.689401   26720 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 10:33:01.689421   26720 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 10:33:01.689403   26720 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 10:33:01.691176   26720 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 10:33:01.691193   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 10:33:01.691212   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.691345   26720 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 10:33:01.691355   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 10:33:01.691370   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.691412   26720 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 10:33:01.691420   26720 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 10:33:01.691433   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.693605   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0127 10:33:01.693750   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0127 10:33:01.694111   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.694633   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.694650   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.695006   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.695299   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.695760   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I0127 10:33:01.696113   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.696550   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0127 10:33:01.696810   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.696823   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.696888   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.697286   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.697564   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.697582   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.698019   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.698050   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.698283   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.698361   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.698688   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.699919   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.700078   26720 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 10:33:01.700991   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.701043   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.701355   26720 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 10:33:01.701372   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 10:33:01.701391   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.701674   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.701705   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.701813   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.701823   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.701883   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.702023   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.702119   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.702124   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.702195   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
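The sshutil clients created here (and below) all reuse the profile's machine key; the same session can be opened by hand with the key path the log prints, or simply through minikube:

    ssh -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa docker@192.168.39.92
    minikube ssh -p addons-952541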
	I0127 10:33:01.702543   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.702601   26720 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 10:33:01.703458   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.704406   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.704427   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.704954   26720 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 10:33:01.705101   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.705122   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.705150   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.705324   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.705428   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.705599   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.705615   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.705540   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.705932   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.706143   26720 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 10:33:01.706163   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 10:33:01.706177   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.706181   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.706347   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.706453   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0127 10:33:01.706547   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.706926   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.707505   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.707521   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.707580   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.707597   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.707764   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.707927   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.708148   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.708215   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.708257   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.708808   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.708810   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.710099   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.710510   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.710533   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.710654   26720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 10:33:01.710684   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.711094   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.711249   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.711421   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.711798   26720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 10:33:01.711814   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 10:33:01.711831   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.715819   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.716149   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.716179   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.716450   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.716668   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.716812   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.716952   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.719418   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I0127 10:33:01.719882   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.720348   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.720462   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.720473   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.720531   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:01.720551   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:01.720734   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:01.720748   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:01.720758   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:01.720761   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:01.720766   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:01.720818   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.721018   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.721060   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:01.721076   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:01.721095   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 10:33:01.721171   26720 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
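This warning is expected on this job: volcano was requested (volcano:true in the map above) but is rejected at enable time because the addon does not support the crio runtime. When reproducing, it can be left out explicitly (illustrative):

    minikube addons disable volcano -p addons-952541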
	I0127 10:33:01.722540   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.724188   26720 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 10:33:01.725637   26720 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 10:33:01.725656   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 10:33:01.725675   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.727641   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0127 10:33:01.728225   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.728753   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.728770   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.728838   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.729313   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.729342   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.729381   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.729476   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.729680   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.729701   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.729854   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.729981   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.731062   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40309
	I0127 10:33:01.731696   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.731966   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.732573   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.732591   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.732994   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.733535   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.733576   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.733777   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0127 10:33:01.734155   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.734638   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0127 10:33:01.734644   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 10:33:01.734725   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.734741   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.735134   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.735596   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.735660   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.735823   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.735947   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.736019   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.736104   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 10:33:01.736121   26720 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 10:33:01.736122   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.736141   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.739583   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.739594   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.739998   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.740025   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.740215   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.740387   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.740577   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.740701   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.741186   26720 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 10:33:01.742317   26720 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 10:33:01.742341   26720 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 10:33:01.742360   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.747490   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.747945   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.747961   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.748148   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.748462   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.748671   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.748830   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.749514   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0127 10:33:01.750607   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0127 10:33:01.750824   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.751027   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.751429   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.751446   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.751510   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0127 10:33:01.751726   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.751743   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.751905   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.751992   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.752036   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.752365   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.752473   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:01.752522   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:01.753461   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.753478   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.753784   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.753970   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.754478   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.755542   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.756036   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0127 10:33:01.756417   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.756952   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.756969   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.757546   26720 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 10:33:01.757610   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 10:33:01.757831   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.758068   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0127 10:33:01.758094   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.758566   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.759015   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.759039   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.759353   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.759638   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.759975   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.760217   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 10:33:01.760281   26720 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 10:33:01.761310   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.761506   26720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 10:33:01.761518   26720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 10:33:01.761531   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.761861   26720 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 10:33:01.763270   26720 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 10:33:01.763286   26720 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 10:33:01.763288   26720 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 10:33:01.763299   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.763361   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 10:33:01.764567   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 10:33:01.764628   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.764715   26720 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 10:33:01.764730   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 10:33:01.764747   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.765525   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.765545   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.765563   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.765708   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.765837   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.766628   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.766943   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 10:33:01.767466   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.767936   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.767968   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.768161   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.768382   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.768528   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.768968   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.769302   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	W0127 10:33:01.769602   26720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34984->192.168.39.92:22: read: connection reset by peer
	I0127 10:33:01.769626   26720 retry.go:31] will retry after 336.172337ms: ssh: handshake failed: read tcp 192.168.39.1:34984->192.168.39.92:22: read: connection reset by peer
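The `dial failure (will retry)` / `will retry after 336.172337ms` pair above is minikube's generic retry helper absorbing a transient TCP reset during the SSH handshake instead of failing the addon install. A minimal sketch of that pattern (the function name and backoff bounds are illustrative; minikube's retry.go derives its delays differently):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryTransient retries fn up to attempts times with a short,
    // jittered delay, logging each failure the way retry.go:31 does.
    func retryTransient(attempts int, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryTransient(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: connection reset by peer")
            }
            return nil
        })
        fmt.Println("result:", err)
    }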
	I0127 10:33:01.769793   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.769812   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.770013   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.770113   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 10:33:01.770230   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.770374   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.770478   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.772443   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 10:33:01.772617   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0127 10:33:01.772909   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:01.773445   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:01.773456   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:01.773797   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:01.773981   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:01.774785   26720 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 10:33:01.775541   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:01.776084   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 10:33:01.776107   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 10:33:01.776124   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.777044   26720 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 10:33:01.778378   26720 out.go:177]   - Using image docker.io/busybox:stable
	I0127 10:33:01.779643   26720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 10:33:01.779659   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 10:33:01.779674   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:01.779824   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.780167   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.780184   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.780493   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.780656   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.780767   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.780863   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:01.782819   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.783218   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:01.783241   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:01.783434   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:01.783572   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:01.783706   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:01.783825   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:02.020889   26720 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 10:33:02.020911   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 10:33:02.072104   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 10:33:02.074652   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 10:33:02.075322   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 10:33:02.084792   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 10:33:02.096167   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 10:33:02.113449   26720 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 10:33:02.113474   26720 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 10:33:02.116767   26720 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 10:33:02.116790   26720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 10:33:02.126749   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 10:33:02.149532   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 10:33:02.149573   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 10:33:02.158542   26720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 10:33:02.158569   26720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
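The bash pipeline above rewrites the coredns ConfigMap in place: sed splices a `hosts` block in front of the `forward . /etc/resolv.conf` directive and a `log` directive in front of `errors`, then pipes the result back through `kubectl replace`. Assuming a stock minikube Corefile, the edited server block comes out roughly as follows (only the touched directives shown):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what the later `host record injected into CoreDNS's ConfigMap` line (10:33:09.484051) confirms: pods in the cluster can now resolve host.minikube.internal to the host-side bridge address 192.168.39.1.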
	I0127 10:33:02.162452   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 10:33:02.178082   26720 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 10:33:02.178107   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 10:33:02.191533   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 10:33:02.209470   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 10:33:02.257833   26720 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 10:33:02.257857   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 10:33:02.284135   26720 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 10:33:02.284161   26720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 10:33:02.313849   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 10:33:02.313873   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 10:33:02.316490   26720 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 10:33:02.316506   26720 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 10:33:02.406808   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 10:33:02.661722   26720 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 10:33:02.661749   26720 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 10:33:02.686384   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 10:33:02.686415   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 10:33:02.687949   26720 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 10:33:02.687970   26720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 10:33:02.836444   26720 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 10:33:02.836472   26720 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 10:33:02.916320   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 10:33:02.942889   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 10:33:02.942929   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 10:33:02.963055   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 10:33:02.963082   26720 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 10:33:03.097546   26720 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 10:33:03.097567   26720 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 10:33:03.109365   26720 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 10:33:03.109391   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 10:33:03.126301   26720 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 10:33:03.126323   26720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 10:33:03.349734   26720 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 10:33:03.349762   26720 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 10:33:03.467788   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 10:33:03.487447   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 10:33:03.487506   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 10:33:03.637107   26720 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 10:33:03.637138   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 10:33:03.723271   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 10:33:03.723304   26720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 10:33:03.788103   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 10:33:03.976308   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 10:33:03.976332   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 10:33:04.191543   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 10:33:04.191564   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 10:33:04.411413   26720 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 10:33:04.411442   26720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 10:33:04.689121   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 10:33:05.549756   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.477617709s)
	I0127 10:33:05.549816   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.549822   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.475138804s)
	I0127 10:33:05.549829   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.549863   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.549935   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.550284   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.550303   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:05.550318   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:05.550319   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:05.550343   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.550359   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:05.550361   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.550372   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.550376   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.550380   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.550587   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.550600   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:05.550673   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.550685   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:05.907435   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.832081847s)
	I0127 10:33:05.907502   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.907513   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.907792   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:05.907818   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.907834   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:05.907850   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:05.907869   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:05.908122   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:05.908138   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:06.638288   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.553455525s)
	I0127 10:33:06.638346   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:06.638362   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:06.638639   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:06.638656   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:06.638666   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:06.638674   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:06.639001   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:06.639025   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:06.639036   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:08.594264   26720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 10:33:08.594308   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:08.597558   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:08.597980   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:08.598006   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:08.598192   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:08.598402   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:08.598587   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:08.598742   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:08.842906   26720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 10:33:08.940518   26720 addons.go:238] Setting addon gcp-auth=true in "addons-952541"
	I0127 10:33:08.940572   26720 host.go:66] Checking if "addons-952541" exists ...
	I0127 10:33:08.940893   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:08.940934   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:08.957233   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0127 10:33:08.957719   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:08.958186   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:08.958208   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:08.958567   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:08.959111   26720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:33:08.959157   26720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:33:08.975009   26720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0127 10:33:08.975472   26720 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:33:08.975945   26720 main.go:141] libmachine: Using API Version  1
	I0127 10:33:08.975966   26720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:33:08.976328   26720 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:33:08.976515   26720 main.go:141] libmachine: (addons-952541) Calling .GetState
	I0127 10:33:08.978252   26720 main.go:141] libmachine: (addons-952541) Calling .DriverName
	I0127 10:33:08.978530   26720 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 10:33:08.978581   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHHostname
	I0127 10:33:08.981641   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:08.982142   26720 main.go:141] libmachine: (addons-952541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:39:b2", ip: ""} in network mk-addons-952541: {Iface:virbr1 ExpiryTime:2025-01-27 11:32:32 +0000 UTC Type:0 Mac:52:54:00:e7:39:b2 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-952541 Clientid:01:52:54:00:e7:39:b2}
	I0127 10:33:08.982178   26720 main.go:141] libmachine: (addons-952541) DBG | domain addons-952541 has defined IP address 192.168.39.92 and MAC address 52:54:00:e7:39:b2 in network mk-addons-952541
	I0127 10:33:08.982337   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHPort
	I0127 10:33:08.982498   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHKeyPath
	I0127 10:33:08.982653   26720 main.go:141] libmachine: (addons-952541) Calling .GetSSHUsername
	I0127 10:33:08.982775   26720 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/addons-952541/id_rsa Username:docker}
	I0127 10:33:09.483884   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.387684277s)
	I0127 10:33:09.483927   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.483937   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.483977   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.357196608s)
	I0127 10:33:09.484021   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484031   26720 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.325441735s)
	I0127 10:33:09.484051   26720 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 10:33:09.484062   26720 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.325490705s)
	I0127 10:33:09.484035   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484130   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.321653068s)
	I0127 10:33:09.484160   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484172   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484212   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.292649757s)
	I0127 10:33:09.484250   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484252   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.274755738s)
	I0127 10:33:09.484261   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484269   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484279   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484361   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.077526677s)
	I0127 10:33:09.484376   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484384   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484490   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.568142156s)
	I0127 10:33:09.484506   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484529   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.484680   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.01685681s)
	W0127 10:33:09.484715   26720 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 10:33:09.484741   26720 retry.go:31] will retry after 311.166188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
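The failure being retried here is an ordering race, not a broken manifest: a single kubectl apply submits both the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object, and the API server has not finished establishing the new CRDs by the time the class is created, hence "no matches for kind ... ensure CRDs are installed first". minikube's answer is simply to retry (and, at 10:33:09.797120 below, to re-apply with --force). A standalone way to close the race is to wait for the CRD's Established condition before applying dependent objects, e.g. `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io`, or in Go with the apiextensions clientset (a sketch; the kubeconfig path is taken from the log, the rest is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRDEstablished polls until the named CRD reports
    // Established=True, after which its custom resources can be applied.
    func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
        for {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := apiextensionsclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForCRDEstablished(ctx, cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        fmt.Println("CRD established; VolumeSnapshotClass objects can be applied")
    }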
	I0127 10:33:09.484812   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.696678806s)
	I0127 10:33:09.484831   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.484841   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.485192   26720 node_ready.go:35] waiting up to 6m0s for node "addons-952541" to be "Ready" ...
	I0127 10:33:09.486847   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.486850   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.486883   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.486894   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.486899   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.486903   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.486911   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.486923   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.486930   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.486949   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.486885   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.486960   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.486968   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.486976   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.486983   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.486990   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487046   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487054   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487062   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487069   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487099   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487118   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487126   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487131   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487138   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487146   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487152   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487166   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487195   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487216   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487221   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487228   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487233   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487244   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487254   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487262   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487269   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.486969   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.487337   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.487449   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487478   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487496   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487503   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487508   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487518   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487521   26720 addons.go:479] Verifying addon metrics-server=true in "addons-952541"
	I0127 10:33:09.487526   26720 addons.go:479] Verifying addon registry=true in "addons-952541"
	I0127 10:33:09.487700   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.487727   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.487733   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.487998   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.488085   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.488094   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.488274   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.488302   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.488312   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.488316   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.488343   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.488350   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.488360   26720 addons.go:479] Verifying addon ingress=true in "addons-952541"
	I0127 10:33:09.488785   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.488815   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.489838   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.488830   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.488856   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.489972   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.490008   26720 out.go:177] * Verifying registry addon...
	I0127 10:33:09.491056   26720 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-952541 service yakd-dashboard -n yakd-dashboard
	
	I0127 10:33:09.491066   26720 out.go:177] * Verifying ingress addon...
	I0127 10:33:09.491947   26720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 10:33:09.492770   26720 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 10:33:09.506710   26720 node_ready.go:49] node "addons-952541" has status "Ready":"True"
	I0127 10:33:09.506732   26720 node_ready.go:38] duration metric: took 21.526228ms for node "addons-952541" to be "Ready" ...
	I0127 10:33:09.506741   26720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 10:33:09.526810   26720 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 10:33:09.526839   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:09.540988   26720 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 10:33:09.541010   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:09.549153   26720 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace to be "Ready" ...
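The kapi.go:96 and pod_ready.go lines that repeat from here on are simple poll loops: list pods by label selector (or by name for system-critical pods) and block until every one reports Ready. A minimal client-go version of the label-selector wait (a sketch only; the function name and intervals are illustrative, not kapi.go's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allReady reports whether every pod has Ready=True in its conditions.
    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    // waitForLabeledPodsReady polls until at least one pod matches selector
    // in ns and all matching pods are Ready.
    func waitForLabeledPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        err = waitForLabeledPodsReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")
        fmt.Println("wait result:", err)
    }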
	I0127 10:33:09.562748   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.562772   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.563043   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:09.563045   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.563070   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 10:33:09.563146   26720 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
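The warning above is an optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the losing update carried a stale resourceVersion. The standard remedy is to re-read the object and re-apply the mutation on every conflict, which client-go packages as retry.RetryOnConflict. A sketch under that assumption (the annotation key is the real default-class marker; the surrounding function is illustrative, not minikube's callback):

    package scutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-fetching the object on each attempt so a stale resourceVersion
    // never causes a terminal "object has been modified" failure.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }

Calling `markNonDefault(ctx, cs, "local-path")` performs the same demotion the addon callback attempted, but survives the concurrent writer that triggered the warning above.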
	I0127 10:33:09.570679   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:09.570698   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:09.570978   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:09.570997   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:09.797120   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 10:33:09.987824   26720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-952541" context rescaled to 1 replicas
	I0127 10:33:10.004526   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:10.004551   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:10.299471   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.610304517s)
	I0127 10:33:10.299533   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:10.299551   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:10.299536   26720 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.320978649s)
	I0127 10:33:10.299792   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:10.299847   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:10.299866   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:10.299899   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:10.299912   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:10.300151   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:10.300162   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:10.300173   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:10.300183   26720 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-952541"
	I0127 10:33:10.302152   26720 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 10:33:10.302186   26720 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 10:33:10.303576   26720 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 10:33:10.304237   26720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 10:33:10.304675   26720 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 10:33:10.304689   26720 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 10:33:10.354109   26720 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 10:33:10.354134   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:10.425725   26720 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 10:33:10.425747   26720 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 10:33:10.497620   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:10.503748   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:10.566926   26720 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 10:33:10.566949   26720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 10:33:10.630066   26720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 10:33:10.809910   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:10.996767   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:10.997461   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:11.325512   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:11.496839   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:11.497158   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:11.555374   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:11.808929   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:12.012861   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:12.018774   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:12.232553   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.602440266s)
	I0127 10:33:12.232603   26720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.435437629s)
	I0127 10:33:12.232641   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:12.232742   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:12.232707   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:12.232809   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:12.233069   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:12.233109   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:12.233118   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:12.233112   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:12.233130   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:12.233149   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:12.233163   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:12.233173   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:12.233133   26720 main.go:141] libmachine: Making call to close driver server
	I0127 10:33:12.233201   26720 main.go:141] libmachine: (addons-952541) Calling .Close
	I0127 10:33:12.233430   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:12.233444   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:12.233451   26720 main.go:141] libmachine: (addons-952541) DBG | Closing plugin on server side
	I0127 10:33:12.233476   26720 main.go:141] libmachine: Successfully made call to close driver server
	I0127 10:33:12.233489   26720 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 10:33:12.234871   26720 addons.go:479] Verifying addon gcp-auth=true in "addons-952541"
	I0127 10:33:12.236708   26720 out.go:177] * Verifying gcp-auth addon...
	I0127 10:33:12.238960   26720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 10:33:12.242267   26720 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 10:33:12.242289   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:12.309075   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:12.501984   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:12.502087   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:12.743264   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:12.845856   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:12.996480   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:12.998721   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:13.274475   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:13.309228   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:13.500990   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:13.501150   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:13.742405   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:13.810064   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:13.996976   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:13.997145   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:14.055000   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:14.242855   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:14.308255   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:14.495909   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:14.496256   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:14.742768   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:14.809099   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:14.995349   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:14.996297   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:15.241721   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:15.308525   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:15.496722   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:15.496862   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:15.742616   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:15.808138   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:15.995500   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:15.997114   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:16.056548   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:16.243012   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:16.309070   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:16.496812   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:16.497434   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:16.742141   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:16.809081   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:16.996762   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:16.996795   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:17.505755   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:17.506141   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:17.506895   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:17.507601   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:17.742629   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:17.809074   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:17.996116   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:17.997112   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:18.243827   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:18.309091   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:18.497256   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:18.497944   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:18.555621   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:18.744033   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:18.808709   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:18.997020   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:18.997537   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:19.242685   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:19.308446   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:19.495440   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:19.498305   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:19.743365   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:19.809553   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:20.424791   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:20.425237   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:20.426037   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:20.426969   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:20.523210   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:20.523489   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:20.555668   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:20.742294   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:20.810052   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:20.996890   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:20.997212   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:21.242711   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:21.308127   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:21.495720   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:21.496874   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:21.742280   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:21.809420   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:21.995206   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:21.997262   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:22.242605   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:22.309827   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:22.686289   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:22.686973   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:22.687222   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:22.742809   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:22.809321   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:22.996617   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:22.998316   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:23.242957   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:23.309229   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:23.496671   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:23.497888   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:23.742116   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:23.808778   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:23.996451   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:23.996736   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:24.243136   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:24.310047   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:24.497658   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:24.498566   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:24.742499   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:24.809090   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:24.995208   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:24.997680   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:25.055030   26720 pod_ready.go:103] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:25.243244   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:25.345571   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:25.497367   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:25.498247   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:25.558321   26720 pod_ready.go:93] pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:25.558341   26720 pod_ready.go:82] duration metric: took 16.009158053s for pod "amd-gpu-device-plugin-n8hnv" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.558349   26720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7tqw5" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.563109   26720 pod_ready.go:93] pod "coredns-668d6bf9bc-7tqw5" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:25.563128   26720 pod_ready.go:82] duration metric: took 4.772556ms for pod "coredns-668d6bf9bc-7tqw5" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.563140   26720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gc7cq" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.564720   26720 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-gc7cq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gc7cq" not found
	I0127 10:33:25.564736   26720 pod_ready.go:82] duration metric: took 1.589222ms for pod "coredns-668d6bf9bc-gc7cq" in "kube-system" namespace to be "Ready" ...
	E0127 10:33:25.564743   26720 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-gc7cq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gc7cq" not found
	I0127 10:33:25.564749   26720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.568990   26720 pod_ready.go:93] pod "etcd-addons-952541" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:25.569011   26720 pod_ready.go:82] duration metric: took 4.255537ms for pod "etcd-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.569018   26720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.572659   26720 pod_ready.go:93] pod "kube-apiserver-addons-952541" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:25.572677   26720 pod_ready.go:82] duration metric: took 3.653208ms for pod "kube-apiserver-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.572685   26720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.742793   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:25.752730   26720 pod_ready.go:93] pod "kube-controller-manager-addons-952541" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:25.752750   26720 pod_ready.go:82] duration metric: took 180.059283ms for pod "kube-controller-manager-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.752761   26720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4pggj" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:25.808582   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:25.995800   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:25.996640   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:26.153882   26720 pod_ready.go:93] pod "kube-proxy-4pggj" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:26.153908   26720 pod_ready.go:82] duration metric: took 401.141063ms for pod "kube-proxy-4pggj" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:26.153918   26720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:26.242867   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:26.308291   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:26.497820   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:26.498143   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:26.553281   26720 pod_ready.go:93] pod "kube-scheduler-addons-952541" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:26.553330   26720 pod_ready.go:82] duration metric: took 399.403325ms for pod "kube-scheduler-addons-952541" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:26.553345   26720 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:26.742737   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:26.807799   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:26.996895   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:26.997116   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:27.242750   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:27.308922   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:27.496535   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:27.497257   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:27.742311   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:27.808706   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:27.996267   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:27.996866   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:28.242961   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:28.309376   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:28.495110   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:28.497338   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:28.559352   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:28.742589   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:28.809612   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:28.996833   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:28.997208   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:29.242774   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:29.308678   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:29.496892   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:29.497012   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:29.742019   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:29.808925   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:29.997708   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:29.999086   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:30.242929   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:30.308153   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:30.495923   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:30.497455   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:30.742913   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:30.809263   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:30.995343   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:30.996950   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:31.059997   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:31.243272   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:31.308971   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:31.505273   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:31.505326   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:31.742285   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:31.809883   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:31.997174   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:31.997288   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:32.241969   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:32.309347   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:32.497023   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:32.497728   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:32.743201   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:32.808925   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:32.997032   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:32.999681   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:33.060261   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:33.242155   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:33.308764   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:33.495586   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:33.496639   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:33.742877   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:33.809076   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:33.995598   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:33.996896   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:34.242822   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:34.308757   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:34.496600   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:34.496843   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:34.742786   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:34.809858   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:34.996041   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:34.996874   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:35.242156   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:35.309097   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:35.496778   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:35.497896   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:35.559861   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:35.743017   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:35.810198   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:35.995590   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:35.996721   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:36.242543   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:36.309658   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:36.598647   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:36.598978   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:36.745908   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:36.808091   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:36.996138   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:36.996959   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:37.242881   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:37.308845   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:37.495318   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:37.496638   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:37.746026   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:37.808423   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:37.996658   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:37.997318   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:38.058303   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:38.242116   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:38.308570   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:38.496494   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:38.496677   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:38.881597   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:38.881837   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:38.996837   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:38.996925   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:39.242809   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:39.308170   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:39.496797   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:39.496953   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:39.743318   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:39.808645   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:39.996610   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:39.998019   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:40.242153   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:40.309131   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:40.496762   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:40.496836   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:40.558312   26720 pod_ready.go:103] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"False"
	I0127 10:33:40.760748   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:40.855019   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:40.995632   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:40.997280   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:41.059785   26720 pod_ready.go:93] pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:41.059806   26720 pod_ready.go:82] duration metric: took 14.506453656s for pod "metrics-server-7fbb699795-xb877" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:41.059816   26720 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7gblr" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:41.064653   26720 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7gblr" in "kube-system" namespace has status "Ready":"True"
	I0127 10:33:41.064675   26720 pod_ready.go:82] duration metric: took 4.85301ms for pod "nvidia-device-plugin-daemonset-7gblr" in "kube-system" namespace to be "Ready" ...
	I0127 10:33:41.064690   26720 pod_ready.go:39] duration metric: took 31.557940535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 10:33:41.064704   26720 api_server.go:52] waiting for apiserver process to appear ...
	I0127 10:33:41.064748   26720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 10:33:41.081148   26720 api_server.go:72] duration metric: took 39.485650435s to wait for apiserver process to appear ...
	I0127 10:33:41.081170   26720 api_server.go:88] waiting for apiserver healthz status ...
	I0127 10:33:41.081189   26720 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0127 10:33:41.085623   26720 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0127 10:33:41.086519   26720 api_server.go:141] control plane version: v1.32.1
	I0127 10:33:41.086550   26720 api_server.go:131] duration metric: took 5.371525ms to wait for apiserver health ...
	I0127 10:33:41.086560   26720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 10:33:41.094259   26720 system_pods.go:59] 18 kube-system pods found
	I0127 10:33:41.094286   26720 system_pods.go:61] "amd-gpu-device-plugin-n8hnv" [6e8e9a84-0604-42e0-a2ba-855e1352ac7b] Running
	I0127 10:33:41.094291   26720 system_pods.go:61] "coredns-668d6bf9bc-7tqw5" [643e9d5f-72da-4abc-9f0b-df4142b99b65] Running
	I0127 10:33:41.094298   26720 system_pods.go:61] "csi-hostpath-attacher-0" [fc58288d-b9a6-4572-8e4b-ef94220aaefb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 10:33:41.094305   26720 system_pods.go:61] "csi-hostpath-resizer-0" [e0675058-e6e7-473b-ba1f-84eec8ce2a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 10:33:41.094312   26720 system_pods.go:61] "csi-hostpathplugin-gbr9l" [cc0cf5da-eeda-49b4-ba83-b701b60f9245] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 10:33:41.094317   26720 system_pods.go:61] "etcd-addons-952541" [eaa77c8e-46dc-4e5f-8703-cef804ef87d5] Running
	I0127 10:33:41.094321   26720 system_pods.go:61] "kube-apiserver-addons-952541" [378a1836-ffb1-413c-bdfa-8550e8327e92] Running
	I0127 10:33:41.094325   26720 system_pods.go:61] "kube-controller-manager-addons-952541" [7b269901-0897-4146-b256-a83d707c735c] Running
	I0127 10:33:41.094334   26720 system_pods.go:61] "kube-ingress-dns-minikube" [2ed18e2f-a7dd-42af-a661-badafdabdb84] Running
	I0127 10:33:41.094340   26720 system_pods.go:61] "kube-proxy-4pggj" [94ffd4e7-a667-4191-959e-3b1d29c5c3a0] Running
	I0127 10:33:41.094352   26720 system_pods.go:61] "kube-scheduler-addons-952541" [20a86742-a6ee-4149-8a49-9523ab4328bc] Running
	I0127 10:33:41.094358   26720 system_pods.go:61] "metrics-server-7fbb699795-xb877" [8543ef6f-a533-4a1a-ba47-60aeb9645aab] Running
	I0127 10:33:41.094364   26720 system_pods.go:61] "nvidia-device-plugin-daemonset-7gblr" [47f69620-0d36-4f43-8761-a6a2f69daf77] Running
	I0127 10:33:41.094377   26720 system_pods.go:61] "registry-6c88467877-674ww" [927afa7c-f786-406c-96cb-762022cff929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 10:33:41.094390   26720 system_pods.go:61] "registry-proxy-qh979" [cb7be8e2-7920-425c-9305-451bcbf8f865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 10:33:41.094405   26720 system_pods.go:61] "snapshot-controller-68b874b76f-9w48s" [88499b05-bb24-4ad9-8f28-ca68617c4f02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 10:33:41.094422   26720 system_pods.go:61] "snapshot-controller-68b874b76f-zsnwn" [d0799899-c6db-49fd-8793-f29b6a04ef64] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 10:33:41.094433   26720 system_pods.go:61] "storage-provisioner" [f99529af-ac0e-4e45-969f-8d44c1b4877e] Running
	I0127 10:33:41.094446   26720 system_pods.go:74] duration metric: took 7.875414ms to wait for pod list to return data ...
	I0127 10:33:41.094465   26720 default_sa.go:34] waiting for default service account to be created ...
	I0127 10:33:41.096450   26720 default_sa.go:45] found service account: "default"
	I0127 10:33:41.096473   26720 default_sa.go:55] duration metric: took 1.998484ms for default service account to be created ...
	I0127 10:33:41.096483   26720 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 10:33:41.103127   26720 system_pods.go:87] 18 kube-system pods found
	I0127 10:33:41.105316   26720 system_pods.go:105] "amd-gpu-device-plugin-n8hnv" [6e8e9a84-0604-42e0-a2ba-855e1352ac7b] Running
	I0127 10:33:41.105337   26720 system_pods.go:105] "coredns-668d6bf9bc-7tqw5" [643e9d5f-72da-4abc-9f0b-df4142b99b65] Running
	I0127 10:33:41.105345   26720 system_pods.go:105] "csi-hostpath-attacher-0" [fc58288d-b9a6-4572-8e4b-ef94220aaefb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 10:33:41.105351   26720 system_pods.go:105] "csi-hostpath-resizer-0" [e0675058-e6e7-473b-ba1f-84eec8ce2a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 10:33:41.105359   26720 system_pods.go:105] "csi-hostpathplugin-gbr9l" [cc0cf5da-eeda-49b4-ba83-b701b60f9245] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 10:33:41.105365   26720 system_pods.go:105] "etcd-addons-952541" [eaa77c8e-46dc-4e5f-8703-cef804ef87d5] Running
	I0127 10:33:41.105370   26720 system_pods.go:105] "kube-apiserver-addons-952541" [378a1836-ffb1-413c-bdfa-8550e8327e92] Running
	I0127 10:33:41.105375   26720 system_pods.go:105] "kube-controller-manager-addons-952541" [7b269901-0897-4146-b256-a83d707c735c] Running
	I0127 10:33:41.105380   26720 system_pods.go:105] "kube-ingress-dns-minikube" [2ed18e2f-a7dd-42af-a661-badafdabdb84] Running
	I0127 10:33:41.105384   26720 system_pods.go:105] "kube-proxy-4pggj" [94ffd4e7-a667-4191-959e-3b1d29c5c3a0] Running
	I0127 10:33:41.105389   26720 system_pods.go:105] "kube-scheduler-addons-952541" [20a86742-a6ee-4149-8a49-9523ab4328bc] Running
	I0127 10:33:41.105396   26720 system_pods.go:105] "metrics-server-7fbb699795-xb877" [8543ef6f-a533-4a1a-ba47-60aeb9645aab] Running
	I0127 10:33:41.105400   26720 system_pods.go:105] "nvidia-device-plugin-daemonset-7gblr" [47f69620-0d36-4f43-8761-a6a2f69daf77] Running
	I0127 10:33:41.105408   26720 system_pods.go:105] "registry-6c88467877-674ww" [927afa7c-f786-406c-96cb-762022cff929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 10:33:41.105416   26720 system_pods.go:105] "registry-proxy-qh979" [cb7be8e2-7920-425c-9305-451bcbf8f865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 10:33:41.105427   26720 system_pods.go:105] "snapshot-controller-68b874b76f-9w48s" [88499b05-bb24-4ad9-8f28-ca68617c4f02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 10:33:41.105436   26720 system_pods.go:105] "snapshot-controller-68b874b76f-zsnwn" [d0799899-c6db-49fd-8793-f29b6a04ef64] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 10:33:41.105440   26720 system_pods.go:105] "storage-provisioner" [f99529af-ac0e-4e45-969f-8d44c1b4877e] Running
	I0127 10:33:41.105447   26720 system_pods.go:147] duration metric: took 8.958013ms to wait for k8s-apps to be running ...
	I0127 10:33:41.105455   26720 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 10:33:41.105493   26720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:33:41.119259   26720 system_svc.go:56] duration metric: took 13.797851ms WaitForService to wait for kubelet
	I0127 10:33:41.119282   26720 kubeadm.go:582] duration metric: took 39.523784857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 10:33:41.119303   26720 node_conditions.go:102] verifying NodePressure condition ...
	I0127 10:33:41.122027   26720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 10:33:41.122048   26720 node_conditions.go:123] node cpu capacity is 2
	I0127 10:33:41.122061   26720 node_conditions.go:105] duration metric: took 2.752822ms to run NodePressure ...
	I0127 10:33:41.122075   26720 start.go:241] waiting for startup goroutines ...
	I0127 10:33:41.242827   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:41.308645   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:41.496761   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:41.496882   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:41.742358   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:41.809018   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:41.996554   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:41.996801   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:42.242863   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:42.308854   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:42.495297   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:42.497298   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:42.742940   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:42.809117   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:42.996330   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:42.997124   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:43.243193   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:43.309542   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:43.496739   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:43.497806   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:43.742832   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:43.808653   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:43.996950   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:43.997236   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:44.242522   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:44.308589   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:44.497033   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:44.499562   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:44.743228   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:44.808921   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:45.238169   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:45.238732   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:45.241952   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:45.420377   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:45.495793   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:45.496878   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:45.742472   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:45.809326   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:45.997329   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:45.997620   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:46.242665   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:46.309061   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:46.496337   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:46.496737   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:46.742599   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:46.809349   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:46.996037   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:46.997254   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:47.249443   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:47.308791   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:47.496165   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:47.496987   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:47.744133   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:47.808824   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:47.995294   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:47.996817   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:48.242139   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:48.309417   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:48.495757   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:48.496743   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:48.745213   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:48.813763   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:48.997010   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:48.997065   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:49.242925   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:49.308413   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:49.496103   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:49.496913   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:49.742717   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:49.813081   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:49.995055   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:49.996797   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:50.242660   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:50.308656   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:50.496905   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:50.497207   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:50.742931   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:50.808524   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:50.997170   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:50.997569   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:51.242418   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:51.309599   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:51.497170   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:51.497720   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:51.742549   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:51.808134   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:52.202763   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:52.204047   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:52.242344   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:52.308459   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:52.496586   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 10:33:52.496988   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:52.742553   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:52.809497   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:52.996116   26720 kapi.go:107] duration metric: took 43.504164564s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 10:33:52.997108   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:53.242809   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:53.309399   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:53.497831   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:53.742521   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:53.809829   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:53.996775   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:54.241792   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:54.308803   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:54.497738   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:54.743121   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:54.809405   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:54.996651   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:55.245010   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:55.312108   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:55.497379   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:55.742510   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:55.809219   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:55.997113   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:56.242840   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:56.309031   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:56.497554   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:56.743425   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:56.810019   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:56.996590   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:57.241797   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:57.336579   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:57.496642   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:57.742523   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:57.808217   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:57.997121   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:58.242989   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:58.308696   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:58.497015   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:58.742582   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:58.815409   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:58.997670   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:59.243061   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:59.308860   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:33:59.497596   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:33:59.742133   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:33:59.808668   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:00.000822   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:00.242431   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:00.309327   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:00.497210   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:00.742466   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:00.809830   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:00.996444   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:01.244129   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:01.310311   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:01.497191   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:01.743867   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:01.810127   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:01.997139   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:02.243847   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:02.308299   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:02.497102   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:02.742817   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:02.808337   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:02.996810   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:03.242600   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:03.309319   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:03.852124   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:03.852903   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:03.853286   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:03.998418   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:04.245896   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:04.309001   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:04.497088   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:04.742451   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:04.808792   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:04.996534   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:05.243364   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:05.309645   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:05.497070   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:05.742098   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:05.844741   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:05.997069   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:06.242298   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:06.308852   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:06.497966   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:06.742913   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:06.809691   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:06.999367   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:07.242406   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:07.343704   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:07.496880   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:07.742081   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:07.808427   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:07.997267   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:08.242759   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:08.309937   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:08.496734   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:08.744680   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:08.848016   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:08.996728   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:09.242569   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:09.309487   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:09.498392   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:09.742728   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:09.809356   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:09.997997   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:10.242004   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:10.308866   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:10.497033   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:10.742534   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:10.808879   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:10.997766   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:11.243292   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:11.309461   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:11.497514   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:11.742968   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:11.856275   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:11.996645   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:12.243135   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:12.311020   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:12.497175   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:12.743338   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:12.844377   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:13.006346   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:13.242974   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:13.308486   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:13.497509   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:13.742547   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:13.808132   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:13.996888   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:14.245320   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:14.348850   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:14.497068   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:14.743309   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:14.809741   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:14.997770   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:15.242405   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:15.311076   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:15.496923   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:15.743013   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:15.809849   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:15.998209   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:16.242506   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:16.309180   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:16.497476   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:16.743180   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:16.809020   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:16.996630   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:17.244426   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:17.309402   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:17.497694   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:17.743907   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:17.810752   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:18.231103   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:18.329875   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:18.332229   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:18.496618   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:18.741855   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:18.809238   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:18.996794   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:19.241984   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:19.309534   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:19.497535   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:20.073933   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:20.074809   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:20.075529   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:20.241726   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:20.309130   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:20.497354   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:20.742835   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:20.808903   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:20.997204   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:21.242895   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:21.308412   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:21.499354   26720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 10:34:21.742612   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:21.809902   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:21.998292   26720 kapi.go:107] duration metric: took 1m12.505518834s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 10:34:22.242373   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:22.345229   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:22.742275   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:22.809591   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 10:34:23.241873   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:23.308219   26720 kapi.go:107] duration metric: took 1m13.003977454s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 10:34:23.742673   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:24.241860   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:24.742302   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:25.244846   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:25.742730   26720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 10:34:26.242642   26720 kapi.go:107] duration metric: took 1m14.00367845s to wait for kubernetes.io/minikube-addons=gcp-auth ...
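	[editor's note] The kapi.go:96 lines above are individual iterations of a label-selector poll, and the kapi.go:107 lines report each selector's total wait time. A rough client-go sketch of that polling pattern (the helper name, poll interval, and readiness check are assumptions for illustration, not minikube's actual kapi implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPod polls until some pod matching selector in ns reports Ready,
// mirroring the "waiting for pod ..." / "duration metric: took ..." pairs
// in the log above. Interval and timeout are illustrative choices.
func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient errors and empty lists just mean "keep polling".
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}
```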
	I0127 10:34:26.244490   26720 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-952541 cluster.
	I0127 10:34:26.245864   26720 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 10:34:26.247198   26720 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 10:34:26.248577   26720 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, nvidia-device-plugin, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0127 10:34:26.250913   26720 addons.go:514] duration metric: took 1m24.655371751s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner inspektor-gadget metrics-server nvidia-device-plugin amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
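	[editor's note] As the gcp-auth message above says, a pod opts out of credential mounting by carrying the `gcp-auth-skip-secret` label key. A minimal sketch of such a pod using client-go types; only the label key comes from the log, while the "true" value, pod name, and image are assumptions for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo", // hypothetical name
			// Pods carrying this label key are skipped by the gcp-auth webhook.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, _ := yaml.Marshal(pod) // emit the manifest you would `kubectl apply -f`
	fmt.Println(string(out))
}
```

	Per the same message, pods that already exist can instead be recreated, or the addon rerun with `addons enable ... --refresh`, to pick up the mount.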
	I0127 10:34:26.250952   26720 start.go:246] waiting for cluster config update ...
	I0127 10:34:26.250976   26720 start.go:255] writing updated cluster config ...
	I0127 10:34:26.251251   26720 ssh_runner.go:195] Run: rm -f paused
	I0127 10:34:26.315267   26720 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 10:34:26.317188   26720 out.go:177] * Done! kubectl is now configured to use "addons-952541" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.686050418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a0e5323-0859-493b-9ca8-0d29570c203f name=/runtime.v1.RuntimeService/Version
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.687294660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14613a7d-5cc5-4651-8a8f-6fe8f25d51ee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.688974926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974255688941869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14613a7d-5cc5-4651-8a8f-6fe8f25d51ee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.689475790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43cf2563-7e91-4a02-b7fd-c3f78a2ec264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.689530924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43cf2563-7e91-4a02-b7fd-c3f78a2ec264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.689864926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1626d1659c7c2bb33367f9cb3dd7ce50c74d8c058990b2310ee3ec95d75bf12,PodSandboxId:004417e4847f8e87836256f756061dc43e324eac81ef62c6f42f8a38d19c8191,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737974115741600797,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9fe24c9f-36f9-4f56-b5db-b573fec024ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bff083babac41c16ff3e7d8c09e84acee3dcfb348c56ca61608b41a24f6e9e9,PodSandboxId:b016bd5580c3c5fe26f39722c22d6dd8af3523e505648c424d13d15f07a6dfc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737974069556472941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2cc622-bc1a-4351-a130-88e84e385bef,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee7ec3ecdbb23a1900f37cb08a87483409840c3aa1a90f1cbbe86307732b2cdf,PodSandboxId:f3a45e931ffab50911732fcd494e714abe2c05f5f2d597e11721a5aa75eb1be5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737974060783562960,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qd8sr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0bd4e24f-ff7a-44ab-88fe-dd2e31e58c04,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0f12e823745126c800b6a6098dc8da2ab1d04a3817c3c81d0a54079195d5a4d7,PodSandboxId:a5a84b450ef7417e0f21e3b759f8ef7db2727e05c13457dafac55bc8bf6f4d75,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737974044034620105,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vvrbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80bcb9df-f731-4d0e-84a0-dffa3ef19beb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35a3081037ae0e5abd384c6cf23eb406d66cfcfd927ef330f2e18150934a1c7,PodSandboxId:972e1dc5a0c7e2e617dc07cbe16f80b25b8b1d25786a272ee9437573a5ea848a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737974038798348334,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t267v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88a96863-a200-4332-8b25-da14f0bd6a5b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54611886a789912b64e88bf3d65d32ff8e27ed7b8db2bbcd766a776e251180e9,PodSandboxId:2fffc58411b70130e971f0b0ed55bdcf98952e87e347b865ba643ef8d1d68089,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737974012304257947,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed18e2f-a7dd-42af-a661-badafdabdb84,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6950ed8930b93aa829552c3a07bc5ec7d8d1d127245adcee53b2cc9dff92a051,PodSandboxId:fd0d5df91514c79fa4d80c82be8be8ded4c8ff724500f9039471fdba81a4fa78,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737974005181569838,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-n8hnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8e9a84-0604-42e0-a2ba-855e1352ac7b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d445ef49581e8d23332a68f7a6311618fcb48dc2a3dad0a71552cb96f9b8653,PodSandboxId:a9e47807d91faa073d5d299434478c7a2b0394b20d4297118bbf9fc79dbccd05,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737973987412681462,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99529af-ac0e-4e45-969f-8d44c1b4877e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a113cc7636ba124ab4ee7d4686c616aa0e4178819e581422385f8c9108b2ed79,PodSandboxId:7bb138b4cecc1c9683ad69940a70d0e6582832ffe88c43fc964620fb79e36118,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737973987361230444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7tqw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643e9d5f-72da-4abc-9f0b-df4142b99b65,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:171301b3d9d6bc2e22b2d8568a1d201201ad84d01f5bf6c06d2eff00abbbdd00,PodSandboxId:3bb5859198d16733afb9b34dc9cc9cad0fcf5d04419db83340fb27e886d27a6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737973983649843976,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4pggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94ffd4e7-a667-4191-959e-3b1d29c5c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b98aaa5ba855f8a1d1458ff13cc
998fd8efb705305efe1128f9ff321d959265,PodSandboxId:27d7f27acfcbf02854f59d2602292efcb98e612c75daaaddac98722c22eaefb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737973972624883129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b8e106beef1aa022041115aeed93c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35c210b609f92b11bc0a229321f2d3f140c981b37ee18aa38f5fdb40fc84a83,PodSandbox
Id:e347b1db653582b11dc29525e0d19bb6a0210585efdd430fe0c05a8f91b38124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737973972611742630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b925762dbf0409e51fe4e3d09d993e43,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e16adf1add8403eccbc7adff1e26d2e70a14352e1f8f802b3230f5ac76ffb0,P
odSandboxId:c1f911fea2ed7e80495626e12f0b694593a1e0016eb5e2416aeb075b5b9fb631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737973972631333872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24872a2276edd2ca9dbb4a57d1346a7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ac489954e475c7ee91d39eb8df0a4de462d1805b143170099b291b56a460f,PodSandboxId:2a7fc
a42d313b4da688b6d11a576f4f1b722469bb539df548cf5a471ae1ddabc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737973972542188569,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c63bce282aa77d92ef9c27a7f767c90,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43cf2563-7e91-4a02-b7fd-c3f78a2ec264 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.706866825Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.707108154Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708262759Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708365541Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708430361Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708485045Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708556355Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708603676Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708642825Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708693256Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708749082Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.708876634Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
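	[editor's note] The credential-lookup lines above end in an anonymous `GET https://registry-1.docker.io/v2/`, the standard registry v2 "ping" that normally returns 401 with a WWW-Authenticate challenge naming the token endpoint. A bare net/http sketch of that first request (the real client, containers/image, layers credential and TLS-cert lookup on top, as the preceding log lines show):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Anonymous ping of the registry v2 endpoint, as in the log line above.
	resp, err := http.Get("https://registry-1.docker.io/v2/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Expect "401 Unauthorized" plus a challenge such as:
	//   Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
	fmt.Println(resp.Status, resp.Header.Get("Www-Authenticate"))
}
```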
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.727292368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f580860c-3818-424e-b160-97373f681730 name=/runtime.v1.RuntimeService/Version
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.727358802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f580860c-3818-424e-b160-97373f681730 name=/runtime.v1.RuntimeService/Version
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.728538628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9da0267-7dce-4a0b-bec3-fda1c00f1b12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.729690610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974255729665964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9da0267-7dce-4a0b-bec3-fda1c00f1b12 name=/runtime.v1.ImageService/ImageFsInfo
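	[editor's note] The interceptors.go Request/Response pairs in this log are CRI gRPC calls against CRI-O. A minimal sketch of issuing the same RuntimeService/Version call, assuming CRI-O's default socket path and the k8s.io/cri-api client (not how the kubelet or the otel collector is actually wired):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default endpoint; adjust if crio.conf overrides it.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// Mirrors the logged VersionResponse: RuntimeName:cri-o, RuntimeVersion:1.29.1, ...
	fmt.Printf("%s %s (API %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```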
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.730254306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef9d7d72-71bf-4c51-b420-ba3e1893748d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.730309794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef9d7d72-71bf-4c51-b420-ba3e1893748d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 10:37:35 addons-952541 crio[662]: time="2025-01-27 10:37:35.730593547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1626d1659c7c2bb33367f9cb3dd7ce50c74d8c058990b2310ee3ec95d75bf12,PodSandboxId:004417e4847f8e87836256f756061dc43e324eac81ef62c6f42f8a38d19c8191,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737974115741600797,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9fe24c9f-36f9-4f56-b5db-b573fec024ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bff083babac41c16ff3e7d8c09e84acee3dcfb348c56ca61608b41a24f6e9e9,PodSandboxId:b016bd5580c3c5fe26f39722c22d6dd8af3523e505648c424d13d15f07a6dfc4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737974069556472941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2cc622-bc1a-4351-a130-88e84e385bef,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee7ec3ecdbb23a1900f37cb08a87483409840c3aa1a90f1cbbe86307732b2cdf,PodSandboxId:f3a45e931ffab50911732fcd494e714abe2c05f5f2d597e11721a5aa75eb1be5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737974060783562960,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qd8sr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0bd4e24f-ff7a-44ab-88fe-dd2e31e58c04,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0f12e823745126c800b6a6098dc8da2ab1d04a3817c3c81d0a54079195d5a4d7,PodSandboxId:a5a84b450ef7417e0f21e3b759f8ef7db2727e05c13457dafac55bc8bf6f4d75,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737974044034620105,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vvrbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80bcb9df-f731-4d0e-84a0-dffa3ef19beb,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35a3081037ae0e5abd384c6cf23eb406d66cfcfd927ef330f2e18150934a1c7,PodSandboxId:972e1dc5a0c7e2e617dc07cbe16f80b25b8b1d25786a272ee9437573a5ea848a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737974038798348334,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t267v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88a96863-a200-4332-8b25-da14f0bd6a5b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54611886a789912b64e88bf3d65d32ff8e27ed7b8db2bbcd766a776e251180e9,PodSandboxId:2fffc58411b70130e971f0b0ed55bdcf98952e87e347b865ba643ef8d1d68089,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737974012304257947,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed18e2f-a7dd-42af-a661-badafdabdb84,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6950ed8930b93aa829552c3a07bc5ec7d8d1d127245adcee53b2cc9dff92a051,PodSandboxId:fd0d5df91514c79fa4d80c82be8be8ded4c8ff724500f9039471fdba81a4fa78,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737974005181569838,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-n8hnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8e9a84-0604-42e0-a2ba-855e1352ac7b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d445ef49581e8d23332a68f7a6311618fcb48dc2a3dad0a71552cb96f9b8653,PodSandboxId:a9e47807d91faa073d5d299434478c7a2b0394b20d4297118bbf9fc79dbccd05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737973987412681462,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99529af-ac0e-4e45-969f-8d44c1b4877e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a113cc7636ba124ab4ee7d4686c616aa0e4178819e581422385f8c9108b2ed79,PodSandboxId:7bb138b4cecc1c9683ad69940a70d0e6582832ffe88c43fc964620fb79e36118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737973987361230444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7tqw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643e9d5f-72da-4abc-9f0b-df4142b99b65,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171301b3d9d6bc2e22b2d8568a1d201201ad84d01f5bf6c06d2eff00abbbdd00,PodSandboxId:3bb5859198d16733afb9b34dc9cc9cad0fcf5d04419db83340fb27e886d27a6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737973983649843976,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4pggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94ffd4e7-a667-4191-959e-3b1d29c5c3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b98aaa5ba855f8a1d1458ff13cc998fd8efb705305efe1128f9ff321d959265,PodSandboxId:27d7f27acfcbf02854f59d2602292efcb98e612c75daaaddac98722c22eaefb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737973972624883129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f5b8e106beef1aa022041115aeed93c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35c210b609f92b11bc0a229321f2d3f140c981b37ee18aa38f5fdb40fc84a83,PodSandboxId:e347b1db653582b11dc29525e0d19bb6a0210585efdd430fe0c05a8f91b38124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737973972611742630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b925762dbf0409e51fe4e3d09d993e43,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e16adf1add8403eccbc7adff1e26d2e70a14352e1f8f802b3230f5ac76ffb0,PodSandboxId:c1f911fea2ed7e80495626e12f0b694593a1e0016eb5e2416aeb075b5b9fb631,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737973972631333872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24872a2276edd2ca9dbb4a57d1346a7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069ac489954e475c7ee91d39eb8df0a4de462d1805b143170099b291b56a460f,PodSandboxId:2a7fca42d313b4da688b6d11a576f4f1b722469bb539df548cf5a471ae1ddabc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737973972542188569,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-952541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c63bce282aa77d92ef9c27a7f767c90,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef9d7d72-71bf-4c51-b420-ba3e1893748d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1626d1659c7c       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   004417e4847f8       nginx
	0bff083babac4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   b016bd5580c3c       busybox
	ee7ec3ecdbb23       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   f3a45e931ffab       ingress-nginx-controller-56d7c84fd4-qd8sr
	0f12e82374512       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   a5a84b450ef74       ingress-nginx-admission-patch-vvrbd
	a35a3081037ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   972e1dc5a0c7e       ingress-nginx-admission-create-t267v
	54611886a7899       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   2fffc58411b70       kube-ingress-dns-minikube
	6950ed8930b93       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   fd0d5df91514c       amd-gpu-device-plugin-n8hnv
	0d445ef49581e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a9e47807d91fa       storage-provisioner
	a113cc7636ba1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   7bb138b4cecc1       coredns-668d6bf9bc-7tqw5
	171301b3d9d6b       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   3bb5859198d16       kube-proxy-4pggj
	d5e16adf1add8       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   c1f911fea2ed7       kube-scheduler-addons-952541
	5b98aaa5ba855       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   27d7f27acfcbf       etcd-addons-952541
	a35c210b609f9       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   e347b1db65358       kube-controller-manager-addons-952541
	069ac489954e4       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   2a7fca42d313b       kube-apiserver-addons-952541
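	
	The ListContainers request/response pair in the crio debug log above, and the table it backs, go over CRI-O's standard CRI gRPC API on the node's CRI socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation in the node description below). A minimal, purely illustrative Go sketch of issuing the same call with the k8s.io/cri-api client follows; it is not part of the test suite, and `crictl ps` on the node gives the equivalent listing.
	
	// Illustrative only: list containers over the CRI socket, as crio logs above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Local unix sockets carry no TLS, hence insecure transport credentials.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// An empty filter returns the full container list, matching the
		// "No filters were applied" debug line in the log above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}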
	
	
	==> coredns [a113cc7636ba124ab4ee7d4686c616aa0e4178819e581422385f8c9108b2ed79] <==
	[INFO] 10.244.0.7:54159 - 57473 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000452431s
	[INFO] 10.244.0.7:54159 - 59531 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000174153s
	[INFO] 10.244.0.7:54159 - 3737 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122425s
	[INFO] 10.244.0.7:54159 - 52298 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000110763s
	[INFO] 10.244.0.7:54159 - 40688 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097645s
	[INFO] 10.244.0.7:54159 - 32768 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000273013s
	[INFO] 10.244.0.7:54159 - 28713 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000682509s
	[INFO] 10.244.0.7:44394 - 36597 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147941s
	[INFO] 10.244.0.7:44394 - 36314 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269074s
	[INFO] 10.244.0.7:50297 - 50209 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124961s
	[INFO] 10.244.0.7:50297 - 50000 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099187s
	[INFO] 10.244.0.7:49442 - 34646 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132255s
	[INFO] 10.244.0.7:49442 - 34866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009376s
	[INFO] 10.244.0.7:38853 - 19436 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099083s
	[INFO] 10.244.0.7:38853 - 19635 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000296376s
	[INFO] 10.244.0.23:34743 - 51227 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001155234s
	[INFO] 10.244.0.23:53968 - 12030 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000109367s
	[INFO] 10.244.0.23:53596 - 12356 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106532s
	[INFO] 10.244.0.23:48914 - 15821 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100561s
	[INFO] 10.244.0.23:32898 - 12727 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085486s
	[INFO] 10.244.0.23:43389 - 39775 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000055336s
	[INFO] 10.244.0.23:41812 - 2512 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000858975s
	[INFO] 10.244.0.23:53095 - 49640 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001111264s
	[INFO] 10.244.0.27:44154 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000277179s
	[INFO] 10.244.0.27:58979 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145147s
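	
	The NXDOMAIN-then-NOERROR pattern above is ordinary pod DNS search-list expansion, not a resolver fault: with the kubelet default ndots:5, a name such as registry.kube-system.svc.cluster.local has only four dots, so the resolver appends each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, as inferred from the logged names) before trying the name as given. A minimal Go sketch of that candidate ordering, illustrative only:
	
	// Illustrative only: reproduce the resolver's search-list candidate order.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expand mirrors resolv.conf semantics: names with fewer than ndots dots
	// are tried with each search suffix first, then as given.
	func expand(name string, ndots int, search []string) []string {
		var candidates []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				candidates = append(candidates, name+"."+s)
			}
		}
		return append(candidates, name)
	}
	
	func main() {
		search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, c := range expand("registry.kube-system.svc.cluster.local", 5, search) {
			fmt.Println(c) // same four names, same order, as the queries from 10.244.0.7
		}
	}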
	
	
	==> describe nodes <==
	Name:               addons-952541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-952541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=addons-952541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T10_32_57_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-952541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 10:32:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-952541
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 10:37:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 10:35:30 +0000   Mon, 27 Jan 2025 10:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 10:35:30 +0000   Mon, 27 Jan 2025 10:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 10:35:30 +0000   Mon, 27 Jan 2025 10:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 10:35:30 +0000   Mon, 27 Jan 2025 10:32:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    addons-952541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 38b9fb30633c4197b880321175830465
	  System UUID:                38b9fb30-633c-4197-b880-321175830465
	  Boot ID:                    7e638802-1e7c-4948-9ca9-bf98bf53ef3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-7d9564db4-mwrqx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-qd8sr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-n8hnv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-668d6bf9bc-7tqw5                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-952541                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-952541                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-952541        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-4pggj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-952541                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m44s)  kubelet          Node addons-952541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m44s)  kubelet          Node addons-952541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m44s)  kubelet          Node addons-952541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node addons-952541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node addons-952541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node addons-952541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m37s                  kubelet          Node addons-952541 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node addons-952541 event: Registered Node addons-952541 in Controller
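	
	A quick sanity check on the Allocated resources table above: the percentages follow from this node's allocatable capacity (cpu: 2, i.e. 2000m; memory: 3912780Ki), apparently truncated to whole percents:
	
	  cpu requests:    850m / 2000m                       = 0.425 -> 42%
	  memory requests: 260Mi = 266240Ki; 266240 / 3912780 ≈ 0.068 -> 6%
	  memory limits:   170Mi = 174080Ki; 174080 / 3912780 ≈ 0.044 -> 4%
	
	Nothing here is oversubscribed; the table's "may be over 100 percent" line is a general caution, since limits are allowed to exceed capacity.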
	
	
	==> dmesg <==
	[  +0.081920] kauditd_printk_skb: 69 callbacks suppressed
	[Jan27 10:33] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.682835] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.117926] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.028212] kauditd_printk_skb: 151 callbacks suppressed
	[  +7.982684] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.783316] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.687421] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.510175] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.300174] kauditd_printk_skb: 5 callbacks suppressed
	[Jan27 10:34] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.874661] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.465608] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.596452] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.282355] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.727817] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.426574] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.057287] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.659596] kauditd_printk_skb: 43 callbacks suppressed
	[Jan27 10:35] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.399958] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.204250] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.453931] kauditd_printk_skb: 10 callbacks suppressed
	[ +20.523495] kauditd_printk_skb: 15 callbacks suppressed
	[Jan27 10:37] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5b98aaa5ba855f8a1d1458ff13cc998fd8efb705305efe1128f9ff321d959265] <==
	{"level":"info","ts":"2025-01-27T10:34:20.055028Z","caller":"traceutil/trace.go:171","msg":"trace[1558911758] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"411.312459ms","start":"2025-01-27T10:34:19.643701Z","end":"2025-01-27T10:34:20.055014Z","steps":["trace[1558911758] 'process raft request'  (duration: 410.970898ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:20.055090Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.951891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:34:20.055116Z","caller":"traceutil/trace.go:171","msg":"trace[2059156124] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"324.019951ms","start":"2025-01-27T10:34:19.731090Z","end":"2025-01-27T10:34:20.055110Z","steps":["trace[2059156124] 'agreement among raft nodes before linearized reading'  (duration: 323.927934ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:20.055136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T10:34:19.731073Z","time spent":"324.056981ms","remote":"127.0.0.1:52208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T10:34:20.055154Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T10:34:19.643684Z","time spent":"411.373705ms","remote":"127.0.0.1:52302","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-952541\" mod_revision:1076 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-952541\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-952541\" > >"}
	{"level":"warn","ts":"2025-01-27T10:34:20.055361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.939854ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:34:20.055390Z","caller":"traceutil/trace.go:171","msg":"trace[2112917932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1128; }","duration":"278.973319ms","start":"2025-01-27T10:34:19.776408Z","end":"2025-01-27T10:34:20.055381Z","steps":["trace[2112917932] 'agreement among raft nodes before linearized reading'  (duration: 278.925568ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:20.055744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.429716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:34:20.055772Z","caller":"traceutil/trace.go:171","msg":"trace[22722058] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1128; }","duration":"259.460961ms","start":"2025-01-27T10:34:19.796303Z","end":"2025-01-27T10:34:20.055764Z","steps":["trace[22722058] 'agreement among raft nodes before linearized reading'  (duration: 259.417611ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T10:34:54.753207Z","caller":"traceutil/trace.go:171","msg":"trace[320841902] linearizableReadLoop","detail":"{readStateIndex:1394; appliedIndex:1393; }","duration":"352.421932ms","start":"2025-01-27T10:34:54.400769Z","end":"2025-01-27T10:34:54.753190Z","steps":["trace[320841902] 'read index received'  (duration: 352.263351ms)","trace[320841902] 'applied index is now lower than readState.Index'  (duration: 157.946µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T10:34:54.753489Z","caller":"traceutil/trace.go:171","msg":"trace[340929977] transaction","detail":"{read_only:false; response_revision:1350; number_of_response:1; }","duration":"363.246923ms","start":"2025-01-27T10:34:54.390226Z","end":"2025-01-27T10:34:54.753473Z","steps":["trace[340929977] 'process raft request'  (duration: 362.852528ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:54.753603Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T10:34:54.390214Z","time spent":"363.324254ms","remote":"127.0.0.1:52194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1345 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T10:34:54.753772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.996504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:34:54.753867Z","caller":"traceutil/trace.go:171","msg":"trace[452790215] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1350; }","duration":"353.109ms","start":"2025-01-27T10:34:54.400747Z","end":"2025-01-27T10:34:54.753856Z","steps":["trace[452790215] 'agreement among raft nodes before linearized reading'  (duration: 352.995533ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:54.753913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T10:34:54.400734Z","time spent":"353.168374ms","remote":"127.0.0.1:52208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T10:34:54.754081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.530153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:34:54.754126Z","caller":"traceutil/trace.go:171","msg":"trace[187740431] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1350; }","duration":"150.576219ms","start":"2025-01-27T10:34:54.603541Z","end":"2025-01-27T10:34:54.754117Z","steps":["trace[187740431] 'agreement among raft nodes before linearized reading'  (duration: 150.513553ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:34:54.756459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.476135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2025-01-27T10:34:54.756508Z","caller":"traceutil/trace.go:171","msg":"trace[651905257] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1350; }","duration":"104.545978ms","start":"2025-01-27T10:34:54.651952Z","end":"2025-01-27T10:34:54.756498Z","steps":["trace[651905257] 'agreement among raft nodes before linearized reading'  (duration: 104.425138ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:35:12.231219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.306324ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11042145307742757469 > lease_revoke:<id:193d94a75240df67>","response":"size:29"}
	{"level":"info","ts":"2025-01-27T10:35:12.231379Z","caller":"traceutil/trace.go:171","msg":"trace[1448638606] linearizableReadLoop","detail":"{readStateIndex:1633; appliedIndex:1632; }","duration":"219.707111ms","start":"2025-01-27T10:35:12.011660Z","end":"2025-01-27T10:35:12.231367Z","steps":["trace[1448638606] 'read index received'  (duration: 90.715721ms)","trace[1448638606] 'applied index is now lower than readState.Index'  (duration: 128.990262ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T10:35:12.231509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.837208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:35:12.231662Z","caller":"traceutil/trace.go:171","msg":"trace[2085266750] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1577; }","duration":"220.016413ms","start":"2025-01-27T10:35:12.011638Z","end":"2025-01-27T10:35:12.231654Z","steps":["trace[2085266750] 'agreement among raft nodes before linearized reading'  (duration: 219.832151ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T10:35:12.231690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.409478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T10:35:12.231739Z","caller":"traceutil/trace.go:171","msg":"trace[2033609953] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1577; }","duration":"183.484267ms","start":"2025-01-27T10:35:12.048245Z","end":"2025-01-27T10:35:12.231729Z","steps":["trace[2033609953] 'agreement among raft nodes before linearized reading'  (duration: 183.404474ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:37:36 up 5 min,  0 users,  load average: 0.27, 0.89, 0.48
	Linux addons-952541 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [069ac489954e475c7ee91d39eb8df0a4de462d1805b143170099b291b56a460f] <==
	E0127 10:34:36.026532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:43520: use of closed network connection
	E0127 10:34:36.190144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:43536: use of closed network connection
	I0127 10:34:45.336302       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.105.79"}
	I0127 10:35:07.825658       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 10:35:08.005089       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.32.67"}
	I0127 10:35:13.817044       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 10:35:14.838563       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 10:35:19.161632       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0127 10:35:21.081241       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 10:35:41.771881       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 10:35:42.302730       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 10:35:42.303202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 10:35:42.328349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 10:35:42.328438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 10:35:42.360991       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 10:35:42.361089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 10:35:42.370672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 10:35:42.370742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 10:35:43.329821       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E0127 10:35:43.335764       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	E0127 10:35:43.358911       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W0127 10:35:43.370948       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0127 10:35:43.382993       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W0127 10:35:43.489099       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0127 10:37:34.566984       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.150.86"}
	
	
	==> kube-controller-manager [a35c210b609f92b11bc0a229321f2d3f140c981b37ee18aa38f5fdb40fc84a83] <==
	W0127 10:36:25.398308       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:36:25.398346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 10:36:37.886166       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 10:36:37.887045       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 10:36:37.887763       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:36:37.887865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 10:36:54.866293       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 10:36:54.867366       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 10:36:54.868578       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:36:54.868655       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 10:37:06.584957       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 10:37:06.586064       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 10:37:06.586967       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:37:06.587008       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 10:37:08.638329       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 10:37:08.639292       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 10:37:08.640131       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:37:08.640197       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 10:37:14.003923       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 10:37:14.004958       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 10:37:14.005782       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 10:37:14.005867       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 10:37:34.389505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="25.633963ms"
	I0127 10:37:34.420014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="30.386904ms"
	I0127 10:37:34.420209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="77.409µs"
	
	
	==> kube-proxy [171301b3d9d6bc2e22b2d8568a1d201201ad84d01f5bf6c06d2eff00abbbdd00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 10:33:04.460034       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 10:33:04.473303       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	E0127 10:33:04.473372       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 10:33:04.560561       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 10:33:04.560606       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 10:33:04.560631       1 server_linux.go:170] "Using iptables Proxier"
	I0127 10:33:04.568061       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 10:33:04.568276       1 server.go:497] "Version info" version="v1.32.1"
	I0127 10:33:04.568303       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 10:33:04.580106       1 config.go:199] "Starting service config controller"
	I0127 10:33:04.580127       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 10:33:04.580149       1 config.go:105] "Starting endpoint slice config controller"
	I0127 10:33:04.580152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 10:33:04.580550       1 config.go:329] "Starting node config controller"
	I0127 10:33:04.580557       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 10:33:04.681954       1 shared_informer.go:320] Caches are synced for node config
	I0127 10:33:04.682004       1 shared_informer.go:320] Caches are synced for service config
	I0127 10:33:04.682017       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d5e16adf1add8403eccbc7adff1e26d2e70a14352e1f8f802b3230f5ac76ffb0] <==
	W0127 10:32:54.879299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 10:32:54.879335       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:54.879424       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 10:32:54.879455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.730067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 10:32:55.730103       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.795053       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 10:32:55.795104       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.800156       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 10:32:55.800227       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.826390       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 10:32:55.826433       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.890305       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 10:32:55.890361       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.891311       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 10:32:55.891353       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:55.935503       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 10:32:55.935548       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:56.026790       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 10:32:56.026866       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 10:32:56.052726       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 10:32:56.052771       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 10:32:56.095018       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 10:32:56.095062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 10:32:59.249733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
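
The list/watch "forbidden" errors above are the usual transient RBAC failures during control-plane bootstrap: the scheduler's informers start before the system:kube-scheduler bindings have propagated, and they recover once the caches sync (10:32:59 above). Had they persisted, the permission could be probed directly; a sketch, assuming the caller has kubectl impersonation rights:

    $ kubectl --context addons-952541 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler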
	
	
	==> kubelet <==
	Jan 27 10:36:57 addons-952541 kubelet[1223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 10:36:57 addons-952541 kubelet[1223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 10:36:57 addons-952541 kubelet[1223]: E0127 10:36:57.356342    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974217355775548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:36:57 addons-952541 kubelet[1223]: E0127 10:36:57.356381    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974217355775548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:02 addons-952541 kubelet[1223]: I0127 10:37:02.156194    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 10:37:07 addons-952541 kubelet[1223]: E0127 10:37:07.358554    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974227357882395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:07 addons-952541 kubelet[1223]: E0127 10:37:07.358851    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974227357882395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:17 addons-952541 kubelet[1223]: E0127 10:37:17.360964    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974237360602627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:17 addons-952541 kubelet[1223]: E0127 10:37:17.361004    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974237360602627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:20 addons-952541 kubelet[1223]: I0127 10:37:20.155388    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-n8hnv" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 10:37:27 addons-952541 kubelet[1223]: E0127 10:37:27.364557    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974247363947543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:27 addons-952541 kubelet[1223]: E0127 10:37:27.365013    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737974247363947543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.393987    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="csi-provisioner"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394369    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="e92f3955-16cc-4f5f-b3e6-ebc3821b88b5" containerName="task-pv-container"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394425    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="hostpath"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394457    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="node-driver-registrar"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394488    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="88499b05-bb24-4ad9-8f28-ca68617c4f02" containerName="volume-snapshot-controller"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394519    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="b3600291-c8ea-48a5-8061-2561f65944aa" containerName="local-path-provisioner"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394554    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="e0675058-e6e7-473b-ba1f-84eec8ce2a41" containerName="csi-resizer"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394584    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="csi-external-health-monitor-controller"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394616    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="d0799899-c6db-49fd-8793-f29b6a04ef64" containerName="volume-snapshot-controller"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394647    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="liveness-probe"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394680    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc0cf5da-eeda-49b4-ba83-b701b60f9245" containerName="csi-snapshotter"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.394710    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="fc58288d-b9a6-4572-8e4b-ef94220aaefb" containerName="csi-attacher"
	Jan 27 10:37:34 addons-952541 kubelet[1223]: I0127 10:37:34.475543    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5h67\" (UniqueName: \"kubernetes.io/projected/daa77d73-7606-4e0f-b050-436cd9068784-kube-api-access-w5h67\") pod \"hello-world-app-7d9564db4-mwrqx\" (UID: \"daa77d73-7606-4e0f-b050-436cd9068784\") " pod="default/hello-world-app-7d9564db4-mwrqx"
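
The repeating eviction-manager errors above stem from CRI-O returning an ImageFsInfoResponse with an empty ContainerFilesystems list, so the kubelet cannot determine whether images live on a dedicated filesystem and skips that eviction pass; the node otherwise keeps running. What the runtime actually reports can be inspected from the node; a diagnostic sketch:

    $ out/minikube-linux-amd64 -p addons-952541 ssh -- sudo crictl imagefsinfo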
	
	
	==> storage-provisioner [0d445ef49581e8d23332a68f7a6311618fcb48dc2a3dad0a71552cb96f9b8653] <==
	I0127 10:33:07.775953       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 10:33:07.824685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 10:33:07.824737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 10:33:07.872675       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 10:33:07.885243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10d66a17-4253-46e2-be69-7c0dea8ee24d", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-952541_aa7743af-f9cb-4c8e-8299-39e08fa5b099 became leader
	I0127 10:33:07.888768       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-952541_aa7743af-f9cb-4c8e-8299-39e08fa5b099!
	I0127 10:33:07.991924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-952541_aa7743af-f9cb-4c8e-8299-39e08fa5b099!
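
The provisioner above wins an Endpoints-based leader-election lease (kube-system/k8s.io-minikube-hostpath); the current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object. A sketch to inspect it:

    $ kubectl --context addons-952541 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'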
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-952541 -n addons-952541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-952541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-mwrqx ingress-nginx-admission-create-t267v ingress-nginx-admission-patch-vvrbd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-952541 describe pod hello-world-app-7d9564db4-mwrqx ingress-nginx-admission-create-t267v ingress-nginx-admission-patch-vvrbd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-952541 describe pod hello-world-app-7d9564db4-mwrqx ingress-nginx-admission-create-t267v ingress-nginx-admission-patch-vvrbd: exit status 1 (71.34242ms)

-- stdout --
	Name:             hello-world-app-7d9564db4-mwrqx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-952541/192.168.39.92
	Start Time:       Mon, 27 Jan 2025 10:37:34 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5h67 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w5h67:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-mwrqx to addons-952541
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t267v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vvrbd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-952541 describe pod hello-world-app-7d9564db4-mwrqx ingress-nginx-admission-create-t267v ingress-nginx-admission-patch-vvrbd: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable ingress-dns --alsologtostderr -v=1: (1.335779623s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable ingress --alsologtostderr -v=1: (7.646863777s)
--- FAIL: TestAddons/parallel/Ingress (158.22s)

TestPreload (281.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-858946 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-858946 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m5.751243668s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858946 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-858946 image pull gcr.io/k8s-minikube/busybox: (2.324219913s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-858946
E0127 11:27:34.555400   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-858946: (1m30.960722909s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-858946 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0127 11:29:10.004286   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:29:26.924641   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-858946 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.901280547s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858946 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-27 11:29:34.853673386 +0000 UTC m=+3453.918967427
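
The image list above contains only the images restored from the v1.24.4 preload (listed under both their registry.k8s.io and k8s.gcr.io names); gcr.io/k8s-minikube/busybox, pulled at preload_test.go:52 before the stop, is absent after the restart, which is exactly what the assertion at preload_test.go:76 rejects. The check can be rerun by hand against the same profile:

    $ out/minikube-linux-amd64 -p test-preload-858946 image list | grep busybox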
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-858946 -n test-preload-858946
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858946 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-751108 ssh -n                                                                 | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	|         | multinode-751108-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-751108 ssh -n multinode-751108 sudo cat                                       | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	|         | /home/docker/cp-test_multinode-751108-m03_multinode-751108.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-751108 cp multinode-751108-m03:/home/docker/cp-test.txt                       | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	|         | multinode-751108-m02:/home/docker/cp-test_multinode-751108-m03_multinode-751108-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-751108 ssh -n                                                                 | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	|         | multinode-751108-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-751108 ssh -n multinode-751108-m02 sudo cat                                   | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	|         | /home/docker/cp-test_multinode-751108-m03_multinode-751108-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-751108 node stop m03                                                          | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:13 UTC |
	| node    | multinode-751108 node start                                                             | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:13 UTC | 27 Jan 25 11:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-751108                                                                | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:14 UTC |                     |
	| stop    | -p multinode-751108                                                                     | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:14 UTC | 27 Jan 25 11:17 UTC |
	| start   | -p multinode-751108                                                                     | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-751108                                                                | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:19 UTC |                     |
	| node    | multinode-751108 node delete                                                            | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:19 UTC | 27 Jan 25 11:19 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-751108 stop                                                                   | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:19 UTC | 27 Jan 25 11:22 UTC |
	| start   | -p multinode-751108                                                                     | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:22 UTC | 27 Jan 25 11:24 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-751108                                                                | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	| start   | -p multinode-751108-m02                                                                 | multinode-751108-m02 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-751108-m03                                                                 | multinode-751108-m03 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-751108                                                                 | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC |                     |
	| delete  | -p multinode-751108-m03                                                                 | multinode-751108-m03 | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| delete  | -p multinode-751108                                                                     | multinode-751108     | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:24 UTC |
	| start   | -p test-preload-858946                                                                  | test-preload-858946  | jenkins | v1.35.0 | 27 Jan 25 11:24 UTC | 27 Jan 25 11:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-858946 image pull                                                          | test-preload-858946  | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:27 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-858946                                                                  | test-preload-858946  | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:28 UTC |
	| start   | -p test-preload-858946                                                                  | test-preload-858946  | jenkins | v1.35.0 | 27 Jan 25 11:28 UTC | 27 Jan 25 11:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-858946 image list                                                          | test-preload-858946  | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:28:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:28:34.784795   57674 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:28:34.784886   57674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:28:34.784894   57674 out.go:358] Setting ErrFile to fd 2...
	I0127 11:28:34.784898   57674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:28:34.785073   57674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:28:34.785600   57674 out.go:352] Setting JSON to false
	I0127 11:28:34.786447   57674 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7815,"bootTime":1737969500,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:28:34.786544   57674 start.go:139] virtualization: kvm guest
	I0127 11:28:34.788866   57674 out.go:177] * [test-preload-858946] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:28:34.790265   57674 notify.go:220] Checking for updates...
	I0127 11:28:34.790274   57674 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:28:34.791721   57674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:28:34.793087   57674 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:28:34.794305   57674 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:28:34.795596   57674 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:28:34.797054   57674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:28:34.798747   57674 config.go:182] Loaded profile config "test-preload-858946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:28:34.799120   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:28:34.799160   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:28:34.814056   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0127 11:28:34.814474   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:28:34.815030   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:28:34.815049   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:28:34.815422   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:28:34.815678   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:28:34.817481   57674 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:28:34.818718   57674 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:28:34.819020   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:28:34.819063   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:28:34.833071   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0127 11:28:34.833484   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:28:34.833899   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:28:34.833917   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:28:34.834212   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:28:34.834437   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:28:34.868889   57674 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:28:34.870186   57674 start.go:297] selected driver: kvm2
	I0127 11:28:34.870199   57674 start.go:901] validating driver "kvm2" against &{Name:test-preload-858946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-858946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:28:34.870283   57674 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:28:34.870954   57674 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:28:34.871025   57674 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:28:34.885994   57674 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:28:34.886344   57674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:28:34.886373   57674 cni.go:84] Creating CNI manager for ""
	I0127 11:28:34.886417   57674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:28:34.886473   57674 start.go:340] cluster config:
	{Name:test-preload-858946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-858946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:28:34.886572   57674 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:28:34.888356   57674 out.go:177] * Starting "test-preload-858946" primary control-plane node in "test-preload-858946" cluster
	I0127 11:28:34.889590   57674 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:28:34.913631   57674 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 11:28:34.913654   57674 cache.go:56] Caching tarball of preloaded images
	I0127 11:28:34.913768   57674 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:28:34.915441   57674 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 11:28:34.916646   57674 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 11:28:34.942899   57674 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 11:28:40.750352   57674 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 11:28:40.750449   57674 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 11:28:41.594268   57674 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
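
The download above fetches the lz4-compressed image tarball with an md5 checksum carried in the URL and verifies it before caching (preload.go:236/254 in this trace). The cached file can be re-verified offline; a sketch using the path and checksum logged above:

    $ md5sum /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
    # expected: b2ee0ab83ed99f9e7ff71cb0cf27e8f9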
	I0127 11:28:41.594391   57674 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/config.json ...
	I0127 11:28:41.594633   57674 start.go:360] acquireMachinesLock for test-preload-858946: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:28:41.594696   57674 start.go:364] duration metric: took 42.006µs to acquireMachinesLock for "test-preload-858946"
	I0127 11:28:41.594714   57674 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:28:41.594720   57674 fix.go:54] fixHost starting: 
	I0127 11:28:41.595022   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:28:41.595061   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:28:41.609872   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0127 11:28:41.610355   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:28:41.610845   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:28:41.610865   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:28:41.611144   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:28:41.611278   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:28:41.611440   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetState
	I0127 11:28:41.613088   57674 fix.go:112] recreateIfNeeded on test-preload-858946: state=Stopped err=<nil>
	I0127 11:28:41.613105   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	W0127 11:28:41.613229   57674 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:28:41.615177   57674 out.go:177] * Restarting existing kvm2 VM for "test-preload-858946" ...
	I0127 11:28:41.616395   57674 main.go:141] libmachine: (test-preload-858946) Calling .Start
	I0127 11:28:41.616541   57674 main.go:141] libmachine: (test-preload-858946) starting domain...
	I0127 11:28:41.616561   57674 main.go:141] libmachine: (test-preload-858946) ensuring networks are active...
	I0127 11:28:41.617238   57674 main.go:141] libmachine: (test-preload-858946) Ensuring network default is active
	I0127 11:28:41.617647   57674 main.go:141] libmachine: (test-preload-858946) Ensuring network mk-test-preload-858946 is active
	I0127 11:28:41.618045   57674 main.go:141] libmachine: (test-preload-858946) getting domain XML...
	I0127 11:28:41.618694   57674 main.go:141] libmachine: (test-preload-858946) creating domain...
	I0127 11:28:42.808151   57674 main.go:141] libmachine: (test-preload-858946) waiting for IP...
	I0127 11:28:42.809017   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:42.809401   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:42.809486   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:42.809404   57743 retry.go:31] will retry after 219.133796ms: waiting for domain to come up
	I0127 11:28:43.029821   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:43.030273   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:43.030313   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:43.030232   57743 retry.go:31] will retry after 295.12732ms: waiting for domain to come up
	I0127 11:28:43.326634   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:43.327077   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:43.327102   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:43.327042   57743 retry.go:31] will retry after 420.112861ms: waiting for domain to come up
	I0127 11:28:43.748593   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:43.749042   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:43.749065   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:43.748987   57743 retry.go:31] will retry after 526.771904ms: waiting for domain to come up
	I0127 11:28:44.277771   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:44.278220   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:44.278249   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:44.278196   57743 retry.go:31] will retry after 666.73251ms: waiting for domain to come up
	I0127 11:28:44.946002   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:44.946476   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:44.946507   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:44.946446   57743 retry.go:31] will retry after 876.840145ms: waiting for domain to come up
	I0127 11:28:45.824636   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:45.825019   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:45.825048   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:45.824972   57743 retry.go:31] will retry after 763.721975ms: waiting for domain to come up
	I0127 11:28:46.589794   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:46.590218   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:46.590243   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:46.590201   57743 retry.go:31] will retry after 1.402933853s: waiting for domain to come up
	I0127 11:28:47.995238   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:47.995707   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:47.995754   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:47.995695   57743 retry.go:31] will retry after 1.344635477s: waiting for domain to come up
	I0127 11:28:49.342177   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:49.342570   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:49.342611   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:49.342548   57743 retry.go:31] will retry after 2.011958903s: waiting for domain to come up
	I0127 11:28:51.356686   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:51.357117   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:51.357144   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:51.357093   57743 retry.go:31] will retry after 2.171289737s: waiting for domain to come up
	I0127 11:28:53.530283   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:53.530718   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:53.530791   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:53.530697   57743 retry.go:31] will retry after 3.32575039s: waiting for domain to come up
	I0127 11:28:56.857729   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:28:56.858133   57674 main.go:141] libmachine: (test-preload-858946) DBG | unable to find current IP address of domain test-preload-858946 in network mk-test-preload-858946
	I0127 11:28:56.858156   57674 main.go:141] libmachine: (test-preload-858946) DBG | I0127 11:28:56.858096   57743 retry.go:31] will retry after 3.202957885s: waiting for domain to come up
	I0127 11:29:00.064644   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.065123   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has current primary IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.065144   57674 main.go:141] libmachine: (test-preload-858946) found domain IP: 192.168.39.61
	I0127 11:29:00.065154   57674 main.go:141] libmachine: (test-preload-858946) reserving static IP address...
	I0127 11:29:00.065737   57674 main.go:141] libmachine: (test-preload-858946) reserved static IP address 192.168.39.61 for domain test-preload-858946
	I0127 11:29:00.065760   57674 main.go:141] libmachine: (test-preload-858946) waiting for SSH...
	I0127 11:29:00.065785   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "test-preload-858946", mac: "52:54:00:41:cb:4f", ip: "192.168.39.61"} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.065802   57674 main.go:141] libmachine: (test-preload-858946) DBG | skip adding static IP to network mk-test-preload-858946 - found existing host DHCP lease matching {name: "test-preload-858946", mac: "52:54:00:41:cb:4f", ip: "192.168.39.61"}
	I0127 11:29:00.065810   57674 main.go:141] libmachine: (test-preload-858946) DBG | Getting to WaitForSSH function...
	I0127 11:29:00.068091   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.068445   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.068476   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.068650   57674 main.go:141] libmachine: (test-preload-858946) DBG | Using SSH client type: external
	I0127 11:29:00.068680   57674 main.go:141] libmachine: (test-preload-858946) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa (-rw-------)
	I0127 11:29:00.068712   57674 main.go:141] libmachine: (test-preload-858946) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:29:00.068724   57674 main.go:141] libmachine: (test-preload-858946) DBG | About to run SSH command:
	I0127 11:29:00.068736   57674 main.go:141] libmachine: (test-preload-858946) DBG | exit 0
	I0127 11:29:00.191408   57674 main.go:141] libmachine: (test-preload-858946) DBG | SSH cmd err, output: <nil>: 
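WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs exit 0 until the guest accepts the key; success is the empty "SSH cmd err, output" line above. A sketch of that probe, with the options copied from the log and the probeSSH wrapper itself assumed:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the guest; a nil error means sshd is up
// and the injected key is accepted.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := probeSSH("192.168.39.61",
		"/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa")
	fmt.Println("ssh probe:", err)
}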
	I0127 11:29:00.191731   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetConfigRaw
	I0127 11:29:00.192302   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetIP
	I0127 11:29:00.194992   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.195300   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.195324   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.195526   57674 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/config.json ...
	I0127 11:29:00.195734   57674 machine.go:93] provisionDockerMachine start ...
	I0127 11:29:00.195757   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:00.195966   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.198146   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.198516   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.198536   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.198661   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:00.198837   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.198973   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.199069   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:00.199185   57674 main.go:141] libmachine: Using SSH client type: native
	I0127 11:29:00.199367   57674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 11:29:00.199377   57674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:29:00.303562   57674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:29:00.303590   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetMachineName
	I0127 11:29:00.303870   57674 buildroot.go:166] provisioning hostname "test-preload-858946"
	I0127 11:29:00.303891   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetMachineName
	I0127 11:29:00.304076   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.306341   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.306615   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.306664   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.306752   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:00.306922   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.307062   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.307171   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:00.307326   57674 main.go:141] libmachine: Using SSH client type: native
	I0127 11:29:00.307493   57674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 11:29:00.307504   57674 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-858946 && echo "test-preload-858946" | sudo tee /etc/hostname
	I0127 11:29:00.428504   57674 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-858946
	
	I0127 11:29:00.428538   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.430953   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.431298   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.431319   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.431510   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:00.431691   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.431857   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.431954   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:00.432100   57674 main.go:141] libmachine: Using SSH client type: native
	I0127 11:29:00.432257   57674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 11:29:00.432273   57674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-858946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-858946/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-858946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:29:00.547419   57674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
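Hostname provisioning is two SSH commands: persist the name via /etc/hostname, then rewrite or append the 127.0.1.1 entry so the name resolves locally. A sketch that renders the same shell, with the hostnameCmds helper being an assumption:

package main

import "fmt"

// hostnameCmds returns the two provisioning commands the log runs:
// set-and-persist the hostname, then fix up /etc/hosts idempotently.
func hostnameCmds(name string) []string {
	return []string{
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCmds("test-preload-858946") {
		fmt.Println(c)
	}
}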
	I0127 11:29:00.547450   57674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:29:00.547491   57674 buildroot.go:174] setting up certificates
	I0127 11:29:00.547501   57674 provision.go:84] configureAuth start
	I0127 11:29:00.547510   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetMachineName
	I0127 11:29:00.547807   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetIP
	I0127 11:29:00.550261   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.550677   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.550712   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.550857   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.552867   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.553171   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.553203   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.553281   57674 provision.go:143] copyHostCerts
	I0127 11:29:00.553338   57674 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:29:00.553349   57674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:29:00.553418   57674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:29:00.553499   57674 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:29:00.553507   57674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:29:00.553544   57674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:29:00.553623   57674 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:29:00.553631   57674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:29:00.553653   57674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:29:00.553699   57674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.test-preload-858946 san=[127.0.0.1 192.168.39.61 localhost minikube test-preload-858946]
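configureAuth regenerates the machine's server certificate, signed by the local minikube CA, with the SANs listed above. A self-contained crypto/x509 sketch; the 2048-bit keys, 24h lifetimes, and the throwaway in-memory CA are illustrative assumptions, not what the provisioner actually persists:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-858946"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.61")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-858946"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(der), "err:", err)
}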
	I0127 11:29:00.680456   57674 provision.go:177] copyRemoteCerts
	I0127 11:29:00.680513   57674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:29:00.680542   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.683022   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.683386   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.683409   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.683569   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:00.683785   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.683933   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:00.684068   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:00.765054   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:29:00.788374   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 11:29:00.810037   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:29:00.831864   57674 provision.go:87] duration metric: took 284.351348ms to configureAuth
	I0127 11:29:00.831890   57674 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:29:00.832104   57674 config.go:182] Loaded profile config "test-preload-858946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:29:00.832184   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:00.834752   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.835099   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:00.835135   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:00.835283   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:00.835467   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.835749   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:00.835910   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:00.836095   57674 main.go:141] libmachine: Using SSH client type: native
	I0127 11:29:00.836259   57674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 11:29:00.836273   57674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:29:01.051999   57674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:29:01.052028   57674 machine.go:96] duration metric: took 856.2732ms to provisionDockerMachine
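The container-runtime option is persisted as a sysconfig drop-in so CRI-O treats the service CIDR as an insecure registry, and crio is restarted to pick it up. A sketch that renders a one-line equivalent of that command; the path and variable name come from the log, while the quoting here is Go's %q rather than the multi-line printf actually used:

package main

import "fmt"

// crioSysconfigCmd builds the provisioning command for
// /etc/sysconfig/crio.minikube shown in the log above.
func crioSysconfigCmd(insecureCIDR string) string {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
	return fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && printf %%s %q | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		content)
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}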
	I0127 11:29:01.052043   57674 start.go:293] postStartSetup for "test-preload-858946" (driver="kvm2")
	I0127 11:29:01.052056   57674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:29:01.052078   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:01.052439   57674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:29:01.052476   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:01.055182   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.055569   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:01.055600   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.055746   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:01.055933   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:01.056076   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:01.056183   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:01.137259   57674 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:29:01.141095   57674 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:29:01.141127   57674 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:29:01.141216   57674 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:29:01.141331   57674 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:29:01.141457   57674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:29:01.149911   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:29:01.171552   57674 start.go:296] duration metric: took 119.494151ms for postStartSetup
	I0127 11:29:01.171591   57674 fix.go:56] duration metric: took 19.576870214s for fixHost
	I0127 11:29:01.171634   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:01.174032   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.174475   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:01.174517   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.174678   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:01.174846   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:01.174957   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:01.175060   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:01.175179   57674 main.go:141] libmachine: Using SSH client type: native
	I0127 11:29:01.175394   57674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0127 11:29:01.175410   57674 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:29:01.279910   57674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977341.257095182
	
	I0127 11:29:01.279947   57674 fix.go:216] guest clock: 1737977341.257095182
	I0127 11:29:01.279954   57674 fix.go:229] Guest: 2025-01-27 11:29:01.257095182 +0000 UTC Remote: 2025-01-27 11:29:01.171595936 +0000 UTC m=+26.422324556 (delta=85.499246ms)
	I0127 11:29:01.279972   57674 fix.go:200] guest clock delta is within tolerance: 85.499246ms
	I0127 11:29:01.279976   57674 start.go:83] releasing machines lock for "test-preload-858946", held for 19.685269563s
	I0127 11:29:01.279994   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:01.280262   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetIP
	I0127 11:29:01.282787   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.283188   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:01.283210   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.283438   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:01.283933   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:01.284098   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:01.284203   57674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:29:01.284242   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:01.284294   57674 ssh_runner.go:195] Run: cat /version.json
	I0127 11:29:01.284319   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:01.287025   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.287283   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.287422   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:01.287452   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.287544   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:01.287723   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:01.287735   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:01.287749   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:01.287908   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:01.287926   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:01.288097   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:01.288105   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:01.288202   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:01.288339   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:01.389831   57674 ssh_runner.go:195] Run: systemctl --version
	I0127 11:29:01.395539   57674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:29:01.545123   57674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:29:01.550777   57674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:29:01.550833   57674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:29:01.567834   57674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:29:01.567862   57674 start.go:495] detecting cgroup driver to use...
	I0127 11:29:01.567925   57674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:29:01.583385   57674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:29:01.596819   57674 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:29:01.596878   57674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:29:01.609491   57674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:29:01.622465   57674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:29:01.727827   57674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:29:01.874191   57674 docker.go:233] disabling docker service ...
	I0127 11:29:01.874252   57674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:29:01.888786   57674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:29:01.901218   57674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:29:02.034068   57674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:29:02.152740   57674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
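Disabling docker follows the standard systemd sequence: stop the socket and service, disable the socket, mask the unit so nothing re-activates it, then verify with is-active. A sketch of that sequence; unit names are from the log, the runner is assumed and sudo is omitted:

package main

import (
	"fmt"
	"os/exec"
)

func disableDocker() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}

func main() { disableDocker() }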
	I0127 11:29:02.166090   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:29:02.183383   57674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 11:29:02.183448   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.193456   57674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:29:02.193508   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.203517   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.213204   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.223062   57674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:29:02.233044   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.242684   57674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:29:02.259277   57674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
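The crio.conf edits above are sed rewrites of /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.7, set cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. An in-memory toy version of the first three patches, not the sed pipeline itself:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patchCrioConf mirrors the sed edits from the log on an in-memory config.
func patchCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	if !strings.Contains(conf, "conmon_cgroup") {
		conf = strings.Replace(conf,
			fmt.Sprintf("cgroup_manager = %q", cgroupMgr),
			fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupMgr), 1)
	}
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.7", "cgroupfs"))
}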
	I0127 11:29:02.269357   57674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:29:02.278543   57674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:29:02.278623   57674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:29:02.291321   57674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
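The status-255 failure above is expected: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the fallback is modprobe followed by enabling IPv4 forwarding. A sketch of that probe-then-load flow (it needs root to actually write the proc file):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// The sysctl read fails while the module is absent.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() { fmt.Println(ensureBridgeNetfilter()) }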
	I0127 11:29:02.300540   57674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:29:02.417897   57674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:29:02.502112   57674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:29:02.502179   57674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:29:02.506867   57674 start.go:563] Will wait 60s for crictl version
	I0127 11:29:02.506931   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:02.510501   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:29:02.552283   57674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:29:02.552360   57674 ssh_runner.go:195] Run: crio --version
	I0127 11:29:02.579472   57674 ssh_runner.go:195] Run: crio --version
	I0127 11:29:02.608285   57674 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 11:29:02.609742   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetIP
	I0127 11:29:02.612216   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:02.612502   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:02.612535   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:02.612744   57674 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 11:29:02.616848   57674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:29:02.628567   57674 kubeadm.go:883] updating cluster {Name:test-preload-858946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-858946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:29:02.628666   57674 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:29:02.628706   57674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:29:02.661121   57674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 11:29:02.661191   57674 ssh_runner.go:195] Run: which lz4
	I0127 11:29:02.664853   57674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:29:02.668668   57674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:29:02.668703   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 11:29:04.001328   57674 crio.go:462] duration metric: took 1.336497285s to copy over tarball
	I0127 11:29:04.001414   57674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:29:06.295098   57674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.29365274s)
	I0127 11:29:06.295129   57674 crio.go:469] duration metric: took 2.293767144s to extract the tarball
	I0127 11:29:06.295138   57674 ssh_runner.go:146] rm: /preloaded.tar.lz4
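The preload path: stat the tarball on the guest, transfer the ~459 MB cached archive if it is missing, extract into /var with extended attributes preserved so file capabilities survive, then delete it. A sketch with the copy step stubbed out:

package main

import (
	"fmt"
	"os/exec"
)

func restorePreload(localTarball string) error {
	if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
		// In the log this is an SCP over the SSH session; stubbed here.
		fmt.Println("not present, would copy", localTarball, "-> /preloaded.tar.lz4")
	}
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := extract.Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
}

func main() {
	fmt.Println(restorePreload("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"))
}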
	I0127 11:29:06.334547   57674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:29:06.371963   57674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 11:29:06.371984   57674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:29:06.372051   57674 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:29:06.372063   57674 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.372092   57674 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.372111   57674 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.372137   57674 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.372164   57674 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 11:29:06.372195   57674 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.372229   57674 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.373433   57674 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.373441   57674 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:29:06.373452   57674 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.373434   57674 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 11:29:06.373434   57674 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.373477   57674 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.373488   57674 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.373505   57674 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.516534   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.519498   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.526367   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 11:29:06.527273   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.533700   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.540633   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.544005   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.609480   57674 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 11:29:06.609531   57674 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.609569   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.666944   57674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 11:29:06.666986   57674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.667034   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.681864   57674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 11:29:06.681899   57674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.681936   57674 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 11:29:06.681976   57674 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 11:29:06.682018   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.681947   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.698849   57674 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 11:29:06.698889   57674 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.698887   57674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 11:29:06.698919   57674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.698931   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.698857   57674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 11:29:06.698964   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.698979   57674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.698995   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.698930   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.699023   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.698954   57674 ssh_runner.go:195] Run: which crictl
	I0127 11:29:06.699058   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:29:06.723059   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.797627   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.797789   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:29:06.797806   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.797878   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.797885   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.797920   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.833566   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:06.938415   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:06.938487   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:29:06.940310   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:29:06.940453   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:29:06.940520   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:29:06.940520   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:06.944184   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:29:07.064099   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:29:07.064101   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 11:29:07.064226   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 11:29:07.079852   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 11:29:07.079926   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 11:29:07.079960   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:29:07.079994   57674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:29:07.080019   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:29:07.080057   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 11:29:07.080115   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:29:07.084048   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 11:29:07.084136   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:29:07.131592   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 11:29:07.131626   57674 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 11:29:07.131680   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 11:29:07.131694   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 11:29:07.131717   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 11:29:07.131696   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 11:29:07.131764   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 11:29:07.131810   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:29:07.132288   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 11:29:07.132324   57674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 11:29:07.132394   57674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:29:07.336042   57674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:29:09.890774   57674 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.758934984s)
	I0127 11:29:09.890819   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 11:29:09.890820   57674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.759121088s)
	I0127 11:29:09.890841   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 11:29:09.890863   57674 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:29:09.890866   57674 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.75845681s)
	I0127 11:29:09.890881   57674 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 11:29:09.890909   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:29:09.890924   57674 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.554843269s)
	I0127 11:29:10.538195   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 11:29:10.538239   57674 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:29:10.538296   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:29:10.983954   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 11:29:10.984005   57674 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:29:10.984060   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:29:11.333489   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 11:29:11.333533   57674 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:29:11.333585   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:29:13.380218   57674 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.04661099s)
	I0127 11:29:13.380252   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 11:29:13.380290   57674 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:29:13.380356   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:29:14.222487   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 11:29:14.222554   57674 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:29:14.222633   57674 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:29:14.967548   57674 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 11:29:14.967585   57674 cache_images.go:123] Successfully loaded all cached images
	I0127 11:29:14.967590   57674 cache_images.go:92] duration metric: took 8.59559433s to LoadCachedImages
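Each of the eight images goes through the same pipeline: the daemon lookup on the build host fails, the remote runtime is inspected for the expected digest, the stale tag is removed with crictl rmi, the cached archive is copied over (skipped above since the files already exist), and podman load imports it. A sequential toy version of that per-image step; the log runs these concurrently:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage ensures one image is present in the CRI-O image store.
func loadCachedImage(image, archive string) error {
	if exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run() == nil {
		return nil // present at some digest; the real code also compares hashes
	}
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // drop stale tag, ignore errors
	return exec.Command("sudo", "podman", "load", "-i", archive).Run()
}

func main() {
	err := loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}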
	I0127 11:29:14.967601   57674 kubeadm.go:934] updating node { 192.168.39.61 8443 v1.24.4 crio true true} ...
	I0127 11:29:14.967762   57674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-858946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-858946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
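The kubelet drop-in above is templated from the node's Kubernetes version, runtime socket, hostname and IP. A sketch that renders the same unit text; the template constant is an assumption, while the flags are the ones visible in the log:

package main

import "fmt"

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`

func main() {
	fmt.Printf(kubeletUnit, "v1.24.4", "test-preload-858946", "192.168.39.61")
}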
	I0127 11:29:14.967832   57674 ssh_runner.go:195] Run: crio config
	I0127 11:29:15.009078   57674 cni.go:84] Creating CNI manager for ""
	I0127 11:29:15.009102   57674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:29:15.009114   57674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:29:15.009131   57674 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-858946 NodeName:test-preload-858946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:29:15.009285   57674 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-858946"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
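
One way to sanity-check a generated kubeadm config like the dump above before shipping it to the node is to round-trip it through a YAML decoder; the file holds four documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch using gopkg.in/yaml.v3, assuming a local copy of the file:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // hypothetical local copy
        if err != nil {
            log.Fatal(err)
        }
        // Decode every document in the multi-document stream.
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once every document has been read
            }
            fmt.Println(doc["apiVersion"], doc["kind"])
        }
    }
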
	
	I0127 11:29:15.009352   57674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 11:29:15.018523   57674 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:29:15.018584   57674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:29:15.027035   57674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0127 11:29:15.042622   57674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:29:15.057809   57674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0127 11:29:15.073213   57674 ssh_runner.go:195] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0127 11:29:15.076757   57674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
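
The bash one-liner above makes the control-plane host entry idempotent: it filters any previous `control-plane.minikube.internal` line out of /etc/hosts, appends the current mapping, and copies the result back. The same logic expressed in Go (no sudo handling; writing /etc/hosts needs root in practice):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.61\tcontrol-plane.minikube.internal"

        raw, err := os.ReadFile(hostsPath)
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            // Drop any stale mapping for the control-plane alias.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
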
	I0127 11:29:15.088504   57674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:29:15.192495   57674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:29:15.206730   57674 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946 for IP: 192.168.39.61
	I0127 11:29:15.206746   57674 certs.go:194] generating shared ca certs ...
	I0127 11:29:15.206760   57674 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:29:15.206887   57674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:29:15.206924   57674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:29:15.206934   57674 certs.go:256] generating profile certs ...
	I0127 11:29:15.207003   57674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/client.key
	I0127 11:29:15.207056   57674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/apiserver.key.fb58383a
	I0127 11:29:15.207094   57674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/proxy-client.key
	I0127 11:29:15.207230   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:29:15.207318   57674 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:29:15.207336   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:29:15.207363   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:29:15.207386   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:29:15.207417   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:29:15.207469   57674 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:29:15.208225   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:29:15.237792   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:29:15.262815   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:29:15.286052   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:29:15.326449   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 11:29:15.358421   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:29:15.389970   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:29:15.412545   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:29:15.433762   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:29:15.454680   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:29:15.475468   57674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:29:15.496662   57674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:29:15.512284   57674 ssh_runner.go:195] Run: openssl version
	I0127 11:29:15.517525   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:29:15.533230   57674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:29:15.537776   57674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:29:15.537821   57674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:29:15.543083   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:29:15.552634   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:29:15.561966   57674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:29:15.565890   57674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:29:15.565928   57674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:29:15.570931   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:29:15.580361   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:29:15.590307   57674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:29:15.594255   57674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:29:15.594297   57674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:29:15.599441   57674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
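
Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the system trust directory, where OpenSSL looks certificates up by an 8-hex-digit subject hash plus a `.0` suffix (hence the /etc/ssl/certs/b5213941.0-style names). A sketch driving the same two commands from Go, using one of the cert paths from the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl prints the subject hash used as the trust-store filename.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if err := exec.Command("ln", "-fs", cert, link).Run(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
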
	I0127 11:29:15.609237   57674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:29:15.613199   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:29:15.618433   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:29:15.623567   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:29:15.628645   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:29:15.633740   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:29:15.638962   57674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
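
The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). The equivalent check with crypto/x509, against one of the cert paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of -checkend 86400: fail if NotAfter falls within 24h from now.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid past 24h:", cert.NotAfter)
    }
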
	I0127 11:29:15.644222   57674 kubeadm.go:392] StartCluster: {Name:test-preload-858946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-858946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:29:15.644295   57674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:29:15.644334   57674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:29:15.677563   57674 cri.go:89] found id: ""
	I0127 11:29:15.677653   57674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:29:15.686850   57674 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:29:15.686869   57674 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:29:15.686911   57674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:29:15.695401   57674 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:29:15.695852   57674 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-858946" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:29:15.695983   57674 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-858946" cluster setting kubeconfig missing "test-preload-858946" context setting]
	I0127 11:29:15.696313   57674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:29:15.696940   57674 kapi.go:59] client config for test-preload-858946: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/client.crt", KeyFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/client.key", CAFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 11:29:15.697547   57674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:29:15.706071   57674 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.61
	I0127 11:29:15.706096   57674 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:29:15.706109   57674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:29:15.706160   57674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:29:15.738903   57674 cri.go:89] found id: ""
	I0127 11:29:15.738965   57674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:29:15.754385   57674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:29:15.763031   57674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:29:15.763046   57674 kubeadm.go:157] found existing configuration files:
	
	I0127 11:29:15.763084   57674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:29:15.771228   57674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:29:15.771277   57674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:29:15.779630   57674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:29:15.788446   57674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:29:15.788495   57674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:29:15.796716   57674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:29:15.804658   57674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:29:15.804704   57674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:29:15.812962   57674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:29:15.820747   57674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:29:15.820782   57674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:29:15.828850   57674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:29:15.837071   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:15.919217   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:16.578335   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:16.819570   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:16.880907   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:16.980943   57674 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:29:16.981033   57674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:29:17.481271   57674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:29:17.982106   57674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:29:18.001050   57674 api_server.go:72] duration metric: took 1.020104988s to wait for apiserver process to appear ...
	I0127 11:29:18.001074   57674 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:29:18.001092   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:18.001547   57674 api_server.go:269] stopped: https://192.168.39.61:8443/healthz: Get "https://192.168.39.61:8443/healthz": dial tcp 192.168.39.61:8443: connect: connection refused
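
The healthz loop that starts here polls https://192.168.39.61:8443/healthz every 500ms, tolerating connection-refused, 403, and 500 answers until the apiserver settles. A standalone sketch of that poll; InsecureSkipVerify stands in for minikube's real cluster-CA handling:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch only: the real code trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.61:8443/healthz"
        for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
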
	I0127 11:29:18.501222   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:21.640015   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:29:21.640047   57674 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:29:21.640065   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:21.671661   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:29:21.671687   57674 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:29:22.001159   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:22.014871   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:29:22.014903   57674 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:29:22.501526   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:22.507384   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:29:22.507418   57674 api_server.go:103] status: https://192.168.39.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:29:23.002154   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:23.007731   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0127 11:29:23.014061   57674 api_server.go:141] control plane version: v1.24.4
	I0127 11:29:23.014085   57674 api_server.go:131] duration metric: took 5.013004582s to wait for apiserver health ...
	I0127 11:29:23.014094   57674 cni.go:84] Creating CNI manager for ""
	I0127 11:29:23.014100   57674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:29:23.015632   57674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:29:23.016998   57674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:29:23.027896   57674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
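
The 496-byte file written above is a bridge CNI config list. Its exact contents are not shown in the log; a minimal conflist of the same general shape, using the pod CIDR from the kubeadm config above and otherwise illustrative values only, might be written like this:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative only: the real conflist minikube writes is not in the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            log.Fatal(err)
        }
    }
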
	I0127 11:29:23.055351   57674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:29:23.055443   57674 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 11:29:23.055474   57674 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 11:29:23.065303   57674 system_pods.go:59] 7 kube-system pods found
	I0127 11:29:23.065341   57674 system_pods.go:61] "coredns-6d4b75cb6d-b4r6x" [5146f60d-fed8-4e0f-8cb0-850c7e0d58f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:29:23.065353   57674 system_pods.go:61] "etcd-test-preload-858946" [c039f7d2-9637-4332-8703-5521cf21e4e3] Running
	I0127 11:29:23.065359   57674 system_pods.go:61] "kube-apiserver-test-preload-858946" [f3455eee-5273-4433-bd6a-f930e9d76a44] Running
	I0127 11:29:23.065365   57674 system_pods.go:61] "kube-controller-manager-test-preload-858946" [86671a68-bf0e-44fe-be4a-f454e7255734] Running
	I0127 11:29:23.065376   57674 system_pods.go:61] "kube-proxy-47vvb" [d4b3bfa2-d076-4da9-8a4d-bbea46939a32] Running
	I0127 11:29:23.065387   57674 system_pods.go:61] "kube-scheduler-test-preload-858946" [27f12074-2992-42ab-8171-a6ba27914e1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:29:23.065398   57674 system_pods.go:61] "storage-provisioner" [0350a8de-2ecf-4dff-8fd4-f2300a89af77] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:29:23.065408   57674 system_pods.go:74] duration metric: took 10.031845ms to wait for pod list to return data ...
	I0127 11:29:23.065420   57674 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:29:23.070248   57674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:29:23.070268   57674 node_conditions.go:123] node cpu capacity is 2
	I0127 11:29:23.070277   57674 node_conditions.go:105] duration metric: took 4.853018ms to run NodePressure ...
	I0127 11:29:23.070294   57674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:29:23.290827   57674 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:29:23.294843   57674 kubeadm.go:739] kubelet initialised
	I0127 11:29:23.294863   57674 kubeadm.go:740] duration metric: took 4.014845ms waiting for restarted kubelet to initialise ...
	I0127 11:29:23.294872   57674 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:29:23.299073   57674 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:23.303679   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.303701   57674 pod_ready.go:82] duration metric: took 4.605604ms for pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:23.303709   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.303715   57674 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:23.309133   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "etcd-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.309152   57674 pod_ready.go:82] duration metric: took 5.43049ms for pod "etcd-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:23.309159   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "etcd-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.309165   57674 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:23.314062   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "kube-apiserver-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.314104   57674 pod_ready.go:82] duration metric: took 4.925244ms for pod "kube-apiserver-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:23.314115   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "kube-apiserver-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.314129   57674 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:23.458653   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.458720   57674 pod_ready.go:82] duration metric: took 144.571079ms for pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:23.458744   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.458754   57674 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-47vvb" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:23.859295   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "kube-proxy-47vvb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.859336   57674 pod_ready.go:82] duration metric: took 400.568135ms for pod "kube-proxy-47vvb" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:23.859349   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "kube-proxy-47vvb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:23.859358   57674 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:24.260168   57674 pod_ready.go:98] node "test-preload-858946" hosting pod "kube-scheduler-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:24.260193   57674 pod_ready.go:82] duration metric: took 400.827701ms for pod "kube-scheduler-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	E0127 11:29:24.260202   57674 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-858946" hosting pod "kube-scheduler-test-preload-858946" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:24.260212   57674 pod_ready.go:39] duration metric: took 965.329354ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
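
Each pod_ready wait above reduces to fetching the pod and inspecting its PodReady condition; while the node itself reports "Ready":"False", the check is skipped with the errors logged above. A client-go sketch of that per-pod predicate, with the kubeconfig path hypothetical and the pod name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-47vvb", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Println("Ready:", c.Status) // "True" once the pod passes readiness
            }
        }
    }
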
	I0127 11:29:24.260228   57674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:29:24.271485   57674 ops.go:34] apiserver oom_adj: -16
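
The oom_adj probe above confirms the restarted apiserver keeps kubeadm's OOM protection (-16). Reading the value directly is a one-liner against procfs; the PID here is a stand-in for `$(pgrep kube-apiserver)`:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        pid := "1234" // stand-in for the pgrep result
        raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("oom_adj:", strings.TrimSpace(string(raw)))
    }
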
	I0127 11:29:24.271502   57674 kubeadm.go:597] duration metric: took 8.58462774s to restartPrimaryControlPlane
	I0127 11:29:24.271510   57674 kubeadm.go:394] duration metric: took 8.627293556s to StartCluster
	I0127 11:29:24.271535   57674 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:29:24.271624   57674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:29:24.272313   57674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:29:24.272534   57674 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:29:24.272605   57674 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:29:24.272701   57674 addons.go:69] Setting storage-provisioner=true in profile "test-preload-858946"
	I0127 11:29:24.272723   57674 addons.go:238] Setting addon storage-provisioner=true in "test-preload-858946"
	W0127 11:29:24.272732   57674 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:29:24.272732   57674 addons.go:69] Setting default-storageclass=true in profile "test-preload-858946"
	I0127 11:29:24.272765   57674 host.go:66] Checking if "test-preload-858946" exists ...
	I0127 11:29:24.272854   57674 config.go:182] Loaded profile config "test-preload-858946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:29:24.272764   57674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-858946"
	I0127 11:29:24.273148   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:29:24.273200   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:29:24.273270   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:29:24.273310   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:29:24.275123   57674 out.go:177] * Verifying Kubernetes components...
	I0127 11:29:24.276383   57674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:29:24.287951   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I0127 11:29:24.288396   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:29:24.288921   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:29:24.288949   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:29:24.289263   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:29:24.289468   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetState
	I0127 11:29:24.289712   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0127 11:29:24.290145   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:29:24.290644   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:29:24.290675   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:29:24.291020   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:29:24.291599   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:29:24.291660   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:29:24.292196   57674 kapi.go:59] client config for test-preload-858946: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/client.crt", KeyFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/profiles/test-preload-858946/client.key", CAFile:"/home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 11:29:24.292513   57674 addons.go:238] Setting addon default-storageclass=true in "test-preload-858946"
	W0127 11:29:24.292532   57674 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:29:24.292559   57674 host.go:66] Checking if "test-preload-858946" exists ...
	I0127 11:29:24.292920   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:29:24.292955   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:29:24.305442   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0127 11:29:24.305853   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:29:24.306277   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:29:24.306306   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:29:24.306436   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0127 11:29:24.306632   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:29:24.306742   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:29:24.306840   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetState
	I0127 11:29:24.307220   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:29:24.307245   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:29:24.307597   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:29:24.308106   57674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:29:24.308148   57674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:29:24.308612   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:24.310653   57674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:29:24.311870   57674 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:29:24.311885   57674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:29:24.311903   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:24.314727   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:24.315104   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:24.315123   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:24.315289   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:24.315478   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:24.315629   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:24.315771   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:24.354264   57674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0127 11:29:24.354641   57674 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:29:24.355102   57674 main.go:141] libmachine: Using API Version  1
	I0127 11:29:24.355123   57674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:29:24.355434   57674 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:29:24.355653   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetState
	I0127 11:29:24.357104   57674 main.go:141] libmachine: (test-preload-858946) Calling .DriverName
	I0127 11:29:24.357275   57674 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:29:24.357288   57674 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:29:24.357300   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHHostname
	I0127 11:29:24.360171   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:24.360380   57674 main.go:141] libmachine: (test-preload-858946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:cb:4f", ip: ""} in network mk-test-preload-858946: {Iface:virbr1 ExpiryTime:2025-01-27 12:28:52 +0000 UTC Type:0 Mac:52:54:00:41:cb:4f Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:test-preload-858946 Clientid:01:52:54:00:41:cb:4f}
	I0127 11:29:24.360406   57674 main.go:141] libmachine: (test-preload-858946) DBG | domain test-preload-858946 has defined IP address 192.168.39.61 and MAC address 52:54:00:41:cb:4f in network mk-test-preload-858946
	I0127 11:29:24.360575   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHPort
	I0127 11:29:24.360749   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHKeyPath
	I0127 11:29:24.360920   57674 main.go:141] libmachine: (test-preload-858946) Calling .GetSSHUsername
	I0127 11:29:24.361050   57674 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/test-preload-858946/id_rsa Username:docker}
	I0127 11:29:24.447519   57674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:29:24.464752   57674 node_ready.go:35] waiting up to 6m0s for node "test-preload-858946" to be "Ready" ...
	I0127 11:29:24.564742   57674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:29:24.580084   57674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
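
Addon manifests are applied with the cluster's own kubectl binary and kubeconfig, exactly as the two Run lines above show. An equivalent invocation from Go, with paths taken from the log (sudo accepts the VAR=value environment assignment before the command):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.4/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }
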
	I0127 11:29:25.470527   57674 main.go:141] libmachine: Making call to close driver server
	I0127 11:29:25.470554   57674 main.go:141] libmachine: (test-preload-858946) Calling .Close
	I0127 11:29:25.470552   57674 main.go:141] libmachine: Making call to close driver server
	I0127 11:29:25.470571   57674 main.go:141] libmachine: (test-preload-858946) Calling .Close
	I0127 11:29:25.470818   57674 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:29:25.470824   57674 main.go:141] libmachine: (test-preload-858946) DBG | Closing plugin on server side
	I0127 11:29:25.470831   57674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:29:25.470831   57674 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:29:25.470841   57674 main.go:141] libmachine: Making call to close driver server
	I0127 11:29:25.470845   57674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:29:25.470849   57674 main.go:141] libmachine: (test-preload-858946) Calling .Close
	I0127 11:29:25.470854   57674 main.go:141] libmachine: Making call to close driver server
	I0127 11:29:25.470861   57674 main.go:141] libmachine: (test-preload-858946) Calling .Close
	I0127 11:29:25.471066   57674 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:29:25.471082   57674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:29:25.471277   57674 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:29:25.471284   57674 main.go:141] libmachine: (test-preload-858946) DBG | Closing plugin on server side
	I0127 11:29:25.471293   57674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:29:25.476789   57674 main.go:141] libmachine: Making call to close driver server
	I0127 11:29:25.476803   57674 main.go:141] libmachine: (test-preload-858946) Calling .Close
	I0127 11:29:25.477060   57674 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:29:25.477078   57674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:29:25.479788   57674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:29:25.481105   57674 addons.go:514] duration metric: took 1.208508193s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:29:26.468911   57674 node_ready.go:53] node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:28.968237   57674 node_ready.go:53] node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:30.968753   57674 node_ready.go:53] node "test-preload-858946" has status "Ready":"False"
	I0127 11:29:31.968303   57674 node_ready.go:49] node "test-preload-858946" has status "Ready":"True"
	I0127 11:29:31.968331   57674 node_ready.go:38] duration metric: took 7.503550057s for node "test-preload-858946" to be "Ready" ...
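
Node readiness, polled above for 7.5s, is the NodeReady condition on the node object. A one-shot version of that check with client-go (kubeconfig path hypothetical, node name from the log):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-858946", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Println("Ready:", c.Status)
            }
        }
    }
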
	I0127 11:29:31.968341   57674 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:29:31.974268   57674 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:31.978790   57674 pod_ready.go:93] pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:31.978812   57674 pod_ready.go:82] duration metric: took 4.51834ms for pod "coredns-6d4b75cb6d-b4r6x" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:31.978831   57674 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.985404   57674 pod_ready.go:93] pod "etcd-test-preload-858946" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:32.985428   57674 pod_ready.go:82] duration metric: took 1.006590329s for pod "etcd-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.985439   57674 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.990147   57674 pod_ready.go:93] pod "kube-apiserver-test-preload-858946" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:32.990174   57674 pod_ready.go:82] duration metric: took 4.726272ms for pod "kube-apiserver-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.990186   57674 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.995293   57674 pod_ready.go:93] pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:32.995316   57674 pod_ready.go:82] duration metric: took 5.121982ms for pod "kube-controller-manager-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:32.995324   57674 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-47vvb" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:33.168844   57674 pod_ready.go:93] pod "kube-proxy-47vvb" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:33.168870   57674 pod_ready.go:82] duration metric: took 173.539284ms for pod "kube-proxy-47vvb" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:33.168883   57674 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:33.569172   57674 pod_ready.go:93] pod "kube-scheduler-test-preload-858946" in "kube-system" namespace has status "Ready":"True"
	I0127 11:29:33.569196   57674 pod_ready.go:82] duration metric: took 400.30667ms for pod "kube-scheduler-test-preload-858946" in "kube-system" namespace to be "Ready" ...
	I0127 11:29:33.569206   57674 pod_ready.go:39] duration metric: took 1.600856447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:29:33.569220   57674 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:29:33.569269   57674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:29:33.585043   57674 api_server.go:72] duration metric: took 9.312482347s to wait for apiserver process to appear ...
	I0127 11:29:33.585068   57674 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:29:33.585083   57674 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0127 11:29:33.590111   57674 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0127 11:29:33.590809   57674 api_server.go:141] control plane version: v1.24.4
	I0127 11:29:33.590828   57674 api_server.go:131] duration metric: took 5.754248ms to wait for apiserver health ...
	I0127 11:29:33.590849   57674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:29:33.772193   57674 system_pods.go:59] 7 kube-system pods found
	I0127 11:29:33.772220   57674 system_pods.go:61] "coredns-6d4b75cb6d-b4r6x" [5146f60d-fed8-4e0f-8cb0-850c7e0d58f1] Running
	I0127 11:29:33.772225   57674 system_pods.go:61] "etcd-test-preload-858946" [c039f7d2-9637-4332-8703-5521cf21e4e3] Running
	I0127 11:29:33.772228   57674 system_pods.go:61] "kube-apiserver-test-preload-858946" [f3455eee-5273-4433-bd6a-f930e9d76a44] Running
	I0127 11:29:33.772232   57674 system_pods.go:61] "kube-controller-manager-test-preload-858946" [86671a68-bf0e-44fe-be4a-f454e7255734] Running
	I0127 11:29:33.772235   57674 system_pods.go:61] "kube-proxy-47vvb" [d4b3bfa2-d076-4da9-8a4d-bbea46939a32] Running
	I0127 11:29:33.772238   57674 system_pods.go:61] "kube-scheduler-test-preload-858946" [27f12074-2992-42ab-8171-a6ba27914e1a] Running
	I0127 11:29:33.772241   57674 system_pods.go:61] "storage-provisioner" [0350a8de-2ecf-4dff-8fd4-f2300a89af77] Running
	I0127 11:29:33.772247   57674 system_pods.go:74] duration metric: took 181.3888ms to wait for pod list to return data ...
	I0127 11:29:33.772253   57674 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:29:33.969129   57674 default_sa.go:45] found service account: "default"
	I0127 11:29:33.969157   57674 default_sa.go:55] duration metric: took 196.898483ms for default service account to be created ...
	I0127 11:29:33.969165   57674 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:29:34.170851   57674 system_pods.go:87] 7 kube-system pods found
	I0127 11:29:34.369374   57674 system_pods.go:105] "coredns-6d4b75cb6d-b4r6x" [5146f60d-fed8-4e0f-8cb0-850c7e0d58f1] Running
	I0127 11:29:34.369394   57674 system_pods.go:105] "etcd-test-preload-858946" [c039f7d2-9637-4332-8703-5521cf21e4e3] Running
	I0127 11:29:34.369400   57674 system_pods.go:105] "kube-apiserver-test-preload-858946" [f3455eee-5273-4433-bd6a-f930e9d76a44] Running
	I0127 11:29:34.369404   57674 system_pods.go:105] "kube-controller-manager-test-preload-858946" [86671a68-bf0e-44fe-be4a-f454e7255734] Running
	I0127 11:29:34.369409   57674 system_pods.go:105] "kube-proxy-47vvb" [d4b3bfa2-d076-4da9-8a4d-bbea46939a32] Running
	I0127 11:29:34.369413   57674 system_pods.go:105] "kube-scheduler-test-preload-858946" [27f12074-2992-42ab-8171-a6ba27914e1a] Running
	I0127 11:29:34.369418   57674 system_pods.go:105] "storage-provisioner" [0350a8de-2ecf-4dff-8fd4-f2300a89af77] Running
	I0127 11:29:34.369426   57674 system_pods.go:147] duration metric: took 400.254268ms to wait for k8s-apps to be running ...
	I0127 11:29:34.369434   57674 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:29:34.369492   57674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:29:34.383641   57674 system_svc.go:56] duration metric: took 14.197476ms WaitForService to wait for kubelet
	I0127 11:29:34.383684   57674 kubeadm.go:582] duration metric: took 10.111125327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:29:34.383706   57674 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:29:34.568755   57674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:29:34.568780   57674 node_conditions.go:123] node cpu capacity is 2
	I0127 11:29:34.568793   57674 node_conditions.go:105] duration metric: took 185.080371ms to run NodePressure ...
	I0127 11:29:34.568806   57674 start.go:241] waiting for startup goroutines ...
	I0127 11:29:34.568815   57674 start.go:246] waiting for cluster config update ...
	I0127 11:29:34.568829   57674 start.go:255] writing updated cluster config ...
	I0127 11:29:34.569055   57674 ssh_runner.go:195] Run: rm -f paused
	I0127 11:29:34.614106   57674 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 11:29:34.615990   57674 out.go:201] 
	W0127 11:29:34.617322   57674 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 11:29:34.618627   57674 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 11:29:34.620097   57674 out.go:177] * Done! kubectl is now configured to use "test-preload-858946" cluster and "default" namespace by default
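	The checks the harness just ran can be reproduced by hand against this profile. The commands below are a sketch that reuses the context/profile name and binary path from the log; they are not part of the captured run:
	
	# apiserver health, kubelet service state, and system pods, re-checked manually
	kubectl --context test-preload-858946 get --raw /healthz                                  # expect: ok
	out/minikube-linux-amd64 -p test-preload-858946 ssh -- sudo systemctl is-active kubelet   # expect: active
	out/minikube-linux-amd64 -p test-preload-858946 kubectl -- get pods -A                    # version-matched kubectl (v1.24.4), as the hint above suggests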
	
	
	==> CRI-O <==
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.483191539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977375483169987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f24b48d-e576-4a3e-a611-b65fc7f7c743 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.483675975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8ec5f2c-e567-4f19-afe4-85aeeace0680 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.483740829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8ec5f2c-e567-4f19-afe4-85aeeace0680 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.483967536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3067bcae67b3bf15bef8b2ea0af5420cfcbbb0ca3457071d9b62d9f8b6b98e0b,PodSandboxId:04a55dd52f34c77c0c776bf721c36f3a20b954aa694bd55f895cc4705a35351c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737977370058842847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-b4r6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5146f60d-fed8-4e0f-8cb0-850c7e0d58f1,},Annotations:map[string]string{io.kubernetes.container.hash: 766a390c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc37a6b1c0ea91f608ffd8ab3a5acf287d592460f1d6cc92b944c61c482c738,PodSandboxId:e4879a97a860c8a6d611b9176873b8b65379538027c6f22b2b47269b7586780b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737977362913405553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47vvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d4b3bfa2-d076-4da9-8a4d-bbea46939a32,},Annotations:map[string]string{io.kubernetes.container.hash: 963ba696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e8e129b56d337cb055a4a0be245ad57d968a70efd31f213706d510586d054f,PodSandboxId:d2b34d1dc7fa358ba46c3ab3e85f06574d9f6c5f91738c0a36a6e3aab217577e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977362641493652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03
50a8de-2ecf-4dff-8fd4-f2300a89af77,},Annotations:map[string]string{io.kubernetes.container.hash: ae2d1269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82f9bd6169608c850822587d18145333284c5e8904c363a6c7183ab324e1f5cf,PodSandboxId:8249d0e9a9cbddfaf061d0345efdc8281a314922bb19b88fa7014876c6ea8b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737977357694004751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4e4d5c8f15b4d52a2b3d8078b203817f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3dd5ce55128681af1d35714a23d7a2345000553260a7ccdb3b9efaab6d119,PodSandboxId:513d098c295d2ce224e31c0e5842ee930c0572d5383bce6280112b8887205f3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737977357629958443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7f1af44c207748a781a9a452ed54ceb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6903487b42670f3477fbf1ab9a5e93caf9b8dc1c8c84423dcaf176c42d34d3d,PodSandboxId:09e44a7984cd0a77ec18ddf672e0fb4652c86fc76045f87814830108be0f714a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737977357603865360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13cc19c2d545eb3ea20c2ef535af7f14,}
,Annotations:map[string]string{io.kubernetes.container.hash: dfa0895a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b10d8a0c3dcaaa92cf80e4eefb4418aff53be6bb726d6042951fc99b33a9ff1,PodSandboxId:ee1629ca180066fb1034672eaf3017e72e5cd08a27e339cacb0ab98aa87c34f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737977357589373435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b4afa576b0a76d29f6ab666bcafbc5,},Annotation
s:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8ec5f2c-e567-4f19-afe4-85aeeace0680 name=/runtime.v1.RuntimeService/ListContainers
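	CRI-O answers this Version/ImageFsInfo/ListContainers poll cycle several times a second while minikube and the kubelet query runtime state over the CRI. The same ListContainers query can be issued by hand with crictl on the node; crictl ships in the minikube guest, and the endpoint flag below repeats the socket path recorded in the node annotations (a manual sketch, not part of the run):
	
	# same /runtime.v1.RuntimeService/ListContainers call, driven via crictl
	out/minikube-linux-amd64 -p test-preload-858946 ssh -- \
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a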
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.519272843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c84ede8-4458-4fda-baa1-739949579b9c name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.519367441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c84ede8-4458-4fda-baa1-739949579b9c name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.520239036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75e72b69-4eee-42d2-b87e-b9875501652f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.520669830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977375520650261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75e72b69-4eee-42d2-b87e-b9875501652f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.521339640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5931d28c-0596-4914-885c-285759e142b2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.521391860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5931d28c-0596-4914-885c-285759e142b2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.521549964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3067bcae67b3bf15bef8b2ea0af5420cfcbbb0ca3457071d9b62d9f8b6b98e0b,PodSandboxId:04a55dd52f34c77c0c776bf721c36f3a20b954aa694bd55f895cc4705a35351c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737977370058842847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-b4r6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5146f60d-fed8-4e0f-8cb0-850c7e0d58f1,},Annotations:map[string]string{io.kubernetes.container.hash: 766a390c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc37a6b1c0ea91f608ffd8ab3a5acf287d592460f1d6cc92b944c61c482c738,PodSandboxId:e4879a97a860c8a6d611b9176873b8b65379538027c6f22b2b47269b7586780b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737977362913405553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47vvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d4b3bfa2-d076-4da9-8a4d-bbea46939a32,},Annotations:map[string]string{io.kubernetes.container.hash: 963ba696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e8e129b56d337cb055a4a0be245ad57d968a70efd31f213706d510586d054f,PodSandboxId:d2b34d1dc7fa358ba46c3ab3e85f06574d9f6c5f91738c0a36a6e3aab217577e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977362641493652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03
50a8de-2ecf-4dff-8fd4-f2300a89af77,},Annotations:map[string]string{io.kubernetes.container.hash: ae2d1269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82f9bd6169608c850822587d18145333284c5e8904c363a6c7183ab324e1f5cf,PodSandboxId:8249d0e9a9cbddfaf061d0345efdc8281a314922bb19b88fa7014876c6ea8b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737977357694004751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4e4d5c8f15b4d52a2b3d8078b203817f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3dd5ce55128681af1d35714a23d7a2345000553260a7ccdb3b9efaab6d119,PodSandboxId:513d098c295d2ce224e31c0e5842ee930c0572d5383bce6280112b8887205f3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737977357629958443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7f1af44c207748a781a9a452ed54ceb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6903487b42670f3477fbf1ab9a5e93caf9b8dc1c8c84423dcaf176c42d34d3d,PodSandboxId:09e44a7984cd0a77ec18ddf672e0fb4652c86fc76045f87814830108be0f714a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737977357603865360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13cc19c2d545eb3ea20c2ef535af7f14,}
,Annotations:map[string]string{io.kubernetes.container.hash: dfa0895a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b10d8a0c3dcaaa92cf80e4eefb4418aff53be6bb726d6042951fc99b33a9ff1,PodSandboxId:ee1629ca180066fb1034672eaf3017e72e5cd08a27e339cacb0ab98aa87c34f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737977357589373435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b4afa576b0a76d29f6ab666bcafbc5,},Annotation
s:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5931d28c-0596-4914-885c-285759e142b2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.557376379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=951846dc-32a4-4204-9f23-bcf74a12824d name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.557459331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=951846dc-32a4-4204-9f23-bcf74a12824d name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.558655312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=982268c0-6a5e-448c-81a1-e42e8703f77c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.559180937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977375559157763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=982268c0-6a5e-448c-81a1-e42e8703f77c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.560088379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81c07c75-c6ce-48f1-b56a-5f4a48890829 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.560149645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81c07c75-c6ce-48f1-b56a-5f4a48890829 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.560311445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3067bcae67b3bf15bef8b2ea0af5420cfcbbb0ca3457071d9b62d9f8b6b98e0b,PodSandboxId:04a55dd52f34c77c0c776bf721c36f3a20b954aa694bd55f895cc4705a35351c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737977370058842847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-b4r6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5146f60d-fed8-4e0f-8cb0-850c7e0d58f1,},Annotations:map[string]string{io.kubernetes.container.hash: 766a390c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc37a6b1c0ea91f608ffd8ab3a5acf287d592460f1d6cc92b944c61c482c738,PodSandboxId:e4879a97a860c8a6d611b9176873b8b65379538027c6f22b2b47269b7586780b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737977362913405553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47vvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d4b3bfa2-d076-4da9-8a4d-bbea46939a32,},Annotations:map[string]string{io.kubernetes.container.hash: 963ba696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e8e129b56d337cb055a4a0be245ad57d968a70efd31f213706d510586d054f,PodSandboxId:d2b34d1dc7fa358ba46c3ab3e85f06574d9f6c5f91738c0a36a6e3aab217577e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977362641493652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03
50a8de-2ecf-4dff-8fd4-f2300a89af77,},Annotations:map[string]string{io.kubernetes.container.hash: ae2d1269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82f9bd6169608c850822587d18145333284c5e8904c363a6c7183ab324e1f5cf,PodSandboxId:8249d0e9a9cbddfaf061d0345efdc8281a314922bb19b88fa7014876c6ea8b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737977357694004751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4e4d5c8f15b4d52a2b3d8078b203817f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3dd5ce55128681af1d35714a23d7a2345000553260a7ccdb3b9efaab6d119,PodSandboxId:513d098c295d2ce224e31c0e5842ee930c0572d5383bce6280112b8887205f3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737977357629958443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7f1af44c207748a781a9a452ed54ceb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6903487b42670f3477fbf1ab9a5e93caf9b8dc1c8c84423dcaf176c42d34d3d,PodSandboxId:09e44a7984cd0a77ec18ddf672e0fb4652c86fc76045f87814830108be0f714a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737977357603865360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13cc19c2d545eb3ea20c2ef535af7f14,}
,Annotations:map[string]string{io.kubernetes.container.hash: dfa0895a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b10d8a0c3dcaaa92cf80e4eefb4418aff53be6bb726d6042951fc99b33a9ff1,PodSandboxId:ee1629ca180066fb1034672eaf3017e72e5cd08a27e339cacb0ab98aa87c34f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737977357589373435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b4afa576b0a76d29f6ab666bcafbc5,},Annotation
s:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81c07c75-c6ce-48f1-b56a-5f4a48890829 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.590174710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=637fe396-af6a-4f0d-8c41-8e5162975276 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.590265068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=637fe396-af6a-4f0d-8c41-8e5162975276 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.591293115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fc8c3b4-8944-40fd-895d-af05dd098793 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.591730294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977375591709549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fc8c3b4-8944-40fd-895d-af05dd098793 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.592269495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afd9b4d6-fc7d-4e78-b70e-ef9e036a471a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.592328825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afd9b4d6-fc7d-4e78-b70e-ef9e036a471a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:29:35 test-preload-858946 crio[664]: time="2025-01-27 11:29:35.592486797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3067bcae67b3bf15bef8b2ea0af5420cfcbbb0ca3457071d9b62d9f8b6b98e0b,PodSandboxId:04a55dd52f34c77c0c776bf721c36f3a20b954aa694bd55f895cc4705a35351c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737977370058842847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-b4r6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5146f60d-fed8-4e0f-8cb0-850c7e0d58f1,},Annotations:map[string]string{io.kubernetes.container.hash: 766a390c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc37a6b1c0ea91f608ffd8ab3a5acf287d592460f1d6cc92b944c61c482c738,PodSandboxId:e4879a97a860c8a6d611b9176873b8b65379538027c6f22b2b47269b7586780b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737977362913405553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47vvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d4b3bfa2-d076-4da9-8a4d-bbea46939a32,},Annotations:map[string]string{io.kubernetes.container.hash: 963ba696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e8e129b56d337cb055a4a0be245ad57d968a70efd31f213706d510586d054f,PodSandboxId:d2b34d1dc7fa358ba46c3ab3e85f06574d9f6c5f91738c0a36a6e3aab217577e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737977362641493652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03
50a8de-2ecf-4dff-8fd4-f2300a89af77,},Annotations:map[string]string{io.kubernetes.container.hash: ae2d1269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82f9bd6169608c850822587d18145333284c5e8904c363a6c7183ab324e1f5cf,PodSandboxId:8249d0e9a9cbddfaf061d0345efdc8281a314922bb19b88fa7014876c6ea8b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737977357694004751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4e4d5c8f15b4d52a2b3d8078b203817f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3dd5ce55128681af1d35714a23d7a2345000553260a7ccdb3b9efaab6d119,PodSandboxId:513d098c295d2ce224e31c0e5842ee930c0572d5383bce6280112b8887205f3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737977357629958443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7f1af44c207748a781a9a452ed54ceb1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6903487b42670f3477fbf1ab9a5e93caf9b8dc1c8c84423dcaf176c42d34d3d,PodSandboxId:09e44a7984cd0a77ec18ddf672e0fb4652c86fc76045f87814830108be0f714a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737977357603865360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13cc19c2d545eb3ea20c2ef535af7f14,}
,Annotations:map[string]string{io.kubernetes.container.hash: dfa0895a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b10d8a0c3dcaaa92cf80e4eefb4418aff53be6bb726d6042951fc99b33a9ff1,PodSandboxId:ee1629ca180066fb1034672eaf3017e72e5cd08a27e339cacb0ab98aa87c34f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737977357589373435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b4afa576b0a76d29f6ab666bcafbc5,},Annotation
s:map[string]string{io.kubernetes.container.hash: f8e9c603,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afd9b4d6-fc7d-4e78-b70e-ef9e036a471a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3067bcae67b3b       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   04a55dd52f34c       coredns-6d4b75cb6d-b4r6x
	acc37a6b1c0ea       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   e4879a97a860c       kube-proxy-47vvb
	55e8e129b56d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   d2b34d1dc7fa3       storage-provisioner
	82f9bd6169608       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   8249d0e9a9cbd       kube-controller-manager-test-preload-858946
	1dd3dd5ce5512       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   513d098c295d2       kube-scheduler-test-preload-858946
	d6903487b4267       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   09e44a7984cd0       etcd-test-preload-858946
	6b10d8a0c3dca       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   ee1629ca18006       kube-apiserver-test-preload-858946
	
	
	==> coredns [3067bcae67b3bf15bef8b2ea0af5420cfcbbb0ca3457071d9b62d9f8b6b98e0b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:33119 - 8142 "HINFO IN 6619090817830779735.8289701190044572825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037287709s
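	The random-name HINFO lookup appears to be CoreDNS's loop-detection probe; the NXDOMAIN answer indicates no forwarding loop. A quick end-to-end check of this resolver can be run from a throwaway pod (image and pod name below are illustrative, not part of the run):
	
	# resolve the apiserver service through the in-cluster CoreDNS
	kubectl --context test-preload-858946 run dnscheck --rm -it \
	  --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default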
	
	
	==> describe nodes <==
	Name:               test-preload-858946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-858946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=test-preload-858946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_26_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-858946
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:29:31 +0000   Mon, 27 Jan 2025 11:26:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:29:31 +0000   Mon, 27 Jan 2025 11:26:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:29:31 +0000   Mon, 27 Jan 2025 11:26:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:29:31 +0000   Mon, 27 Jan 2025 11:29:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    test-preload-858946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09077e4dd03548bcb0b246074350ef3c
	  System UUID:                09077e4d-d035-48bc-b0b2-46074350ef3c
	  Boot ID:                    488b9cbb-a7dc-4f0c-bb58-78a2b78d0c7f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-b4r6x                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m16s
	  kube-system                 etcd-test-preload-858946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m30s
	  kube-system                 kube-apiserver-test-preload-858946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-controller-manager-test-preload-858946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-proxy-47vvb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 kube-scheduler-test-preload-858946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m36s (x4 over 3m36s)  kubelet          Node test-preload-858946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s (x4 over 3m36s)  kubelet          Node test-preload-858946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s (x4 over 3m36s)  kubelet          Node test-preload-858946 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m29s                  kubelet          Node test-preload-858946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s                  kubelet          Node test-preload-858946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s                  kubelet          Node test-preload-858946 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m18s                  kubelet          Node test-preload-858946 status is now: NodeReady
	  Normal  RegisteredNode           3m16s                  node-controller  Node test-preload-858946 event: Registered Node test-preload-858946 in Controller
	  Normal  Starting                 19s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18s (x8 over 19s)      kubelet          Node test-preload-858946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 19s)      kubelet          Node test-preload-858946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 19s)      kubelet          Node test-preload-858946 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           1s                     node-controller  Node test-preload-858946 event: Registered Node test-preload-858946 in Controller
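	This node dump can be regenerated, or reduced to just the Ready condition the harness waited on (the jsonpath expression is illustrative):
	
	kubectl --context test-preload-858946 describe node test-preload-858946
	kubectl --context test-preload-858946 get node test-preload-858946 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # True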
	
	
	==> dmesg <==
	[Jan27 11:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048719] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037171] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798654] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.898543] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.549397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 11:29] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055631] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057214] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.167102] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.141203] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.260948] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +12.784338] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.055899] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.558263] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +5.847988] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.745816] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +5.521789] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d6903487b42670f3477fbf1ab9a5e93caf9b8dc1c8c84423dcaf176c42d34d3d] <==
	{"level":"info","ts":"2025-01-27T11:29:17.858Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"be6e2cf5fb13c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T11:29:17.861Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T11:29:17.861Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T11:29:17.861Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T11:29:17.861Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=(3350086559969596)"}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","added-peer-id":"be6e2cf5fb13c","added-peer-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:29:17.863Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:29:19.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T11:29:19.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T11:29:19.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2025-01-27T11:29:19.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T11:29:19.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2025-01-27T11:29:19.340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2025-01-27T11:29:19.340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2025-01-27T11:29:19.340Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:test-preload-858946 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T11:29:19.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:29:19.343Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.61:2379"}
	{"level":"info","ts":"2025-01-27T11:29:19.343Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:29:19.344Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T11:29:19.346Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T11:29:19.346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:29:35 up 0 min,  0 users,  load average: 1.19, 0.30, 0.10
	Linux test-preload-858946 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6b10d8a0c3dcaaa92cf80e4eefb4418aff53be6bb726d6042951fc99b33a9ff1] <==
	I0127 11:29:21.589539       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0127 11:29:21.589565       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 11:29:21.589578       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 11:29:21.589625       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 11:29:21.609349       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 11:29:21.620416       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0127 11:29:21.620487       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0127 11:29:21.681769       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 11:29:21.685151       1 cache.go:39] Caches are synced for autoregister controller
	I0127 11:29:21.685425       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 11:29:21.687872       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 11:29:21.689264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 11:29:21.713742       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 11:29:21.719260       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 11:29:21.720784       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 11:29:22.291587       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 11:29:22.585199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 11:29:23.192371       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 11:29:23.208670       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 11:29:23.235288       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 11:29:23.252894       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 11:29:23.269875       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 11:29:23.275874       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 11:29:34.274393       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 11:29:34.341039       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [82f9bd6169608c850822587d18145333284c5e8904c363a6c7183ab324e1f5cf] <==
	I0127 11:29:34.313208       1 shared_informer.go:262] Caches are synced for daemon sets
	I0127 11:29:34.317102       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0127 11:29:34.318467       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0127 11:29:34.321866       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 11:29:34.326330       1 shared_informer.go:262] Caches are synced for crt configmap
	I0127 11:29:34.327496       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0127 11:29:34.330535       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0127 11:29:34.331699       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0127 11:29:34.345482       1 shared_informer.go:262] Caches are synced for taint
	I0127 11:29:34.345557       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0127 11:29:34.345673       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-858946. Assuming now as a timestamp.
	I0127 11:29:34.345711       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0127 11:29:34.346175       1 event.go:294] "Event occurred" object="test-preload-858946" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-858946 event: Registered Node test-preload-858946 in Controller"
	I0127 11:29:34.346215       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0127 11:29:34.349403       1 shared_informer.go:262] Caches are synced for persistent volume
	I0127 11:29:34.483719       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 11:29:34.506079       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0127 11:29:34.507295       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0127 11:29:34.515599       1 shared_informer.go:262] Caches are synced for deployment
	I0127 11:29:34.516749       1 shared_informer.go:262] Caches are synced for disruption
	I0127 11:29:34.516786       1 disruption.go:371] Sending events to api server.
	I0127 11:29:34.525938       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 11:29:34.966847       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 11:29:35.026079       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 11:29:35.026115       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
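
The repeated "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's shared-informer startup handshake: each controller blocks until its informers have completed an initial list/watch before it starts processing events. A minimal sketch of the same pattern with client-go, assuming in-cluster credentials (the names here are illustrative, not minikube's own code):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumption: running inside the cluster
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
        podInformer := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // This is the "Waiting for caches to sync" / "Caches are synced"
        // handshake visible throughout the controller-manager log above.
        if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
            panic("caches never synced")
        }
        fmt.Println("caches are synced; safe to start workers")
    }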
	
	
	==> kube-proxy [acc37a6b1c0ea91f608ffd8ab3a5acf287d592460f1d6cc92b944c61c482c738] <==
	I0127 11:29:23.124639       1 node.go:163] Successfully retrieved node IP: 192.168.39.61
	I0127 11:29:23.124889       1 server_others.go:138] "Detected node IP" address="192.168.39.61"
	I0127 11:29:23.124970       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 11:29:23.197931       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 11:29:23.197960       1 server_others.go:206] "Using iptables Proxier"
	I0127 11:29:23.205453       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 11:29:23.207257       1 server.go:661] "Version info" version="v1.24.4"
	I0127 11:29:23.207282       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:29:23.209293       1 config.go:317] "Starting service config controller"
	I0127 11:29:23.210274       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 11:29:23.210356       1 config.go:226] "Starting endpoint slice config controller"
	I0127 11:29:23.210382       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 11:29:23.213026       1 config.go:444] "Starting node config controller"
	I0127 11:29:23.213034       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 11:29:23.310477       1 shared_informer.go:262] Caches are synced for service config
	I0127 11:29:23.310599       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0127 11:29:23.313406       1 shared_informer.go:262] Caches are synced for node config
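
kube-proxy started with an empty --proxy-mode ("Unknown proxy mode, assuming iptables proxy" with proxyMode=""), so it fell back to the iptables proxier. A hypothetical reduction of that fallback decision:

    package main

    import "fmt"

    // chooseProxyMode is a hypothetical simplification of the decision the
    // kube-proxy log shows: empty or unrecognized modes fall back to iptables.
    func chooseProxyMode(mode string) string {
        switch mode {
        case "iptables", "ipvs":
            return mode
        default:
            return "iptables"
        }
    }

    func main() {
        fmt.Println(chooseProxyMode("")) // prints "iptables", matching the log
    }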
	
	
	==> kube-scheduler [1dd3dd5ce55128681af1d35714a23d7a2345000553260a7ccdb3b9efaab6d119] <==
	I0127 11:29:18.370496       1 serving.go:348] Generated self-signed cert in-memory
	W0127 11:29:21.632862       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 11:29:21.632982       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 11:29:21.633013       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 11:29:21.633067       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 11:29:21.687170       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 11:29:21.688434       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:29:21.701316       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 11:29:21.703059       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 11:29:21.703171       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 11:29:21.703024       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 11:29:21.805194       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.930023    1121 apiserver.go:52] "Watching apiserver"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.933339    1121 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.933559    1121 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.933662    1121 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: E0127 11:29:21.934695    1121 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-b4r6x" podUID=5146f60d-fed8-4e0f-8cb0-850c7e0d58f1
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: E0127 11:29:21.978293    1121 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.986389    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvxhc\" (UniqueName: \"kubernetes.io/projected/d4b3bfa2-d076-4da9-8a4d-bbea46939a32-kube-api-access-bvxhc\") pod \"kube-proxy-47vvb\" (UID: \"d4b3bfa2-d076-4da9-8a4d-bbea46939a32\") " pod="kube-system/kube-proxy-47vvb"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.986879    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0350a8de-2ecf-4dff-8fd4-f2300a89af77-tmp\") pod \"storage-provisioner\" (UID: \"0350a8de-2ecf-4dff-8fd4-f2300a89af77\") " pod="kube-system/storage-provisioner"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.986993    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume\") pod \"coredns-6d4b75cb6d-b4r6x\" (UID: \"5146f60d-fed8-4e0f-8cb0-850c7e0d58f1\") " pod="kube-system/coredns-6d4b75cb6d-b4r6x"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987086    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4b3bfa2-d076-4da9-8a4d-bbea46939a32-xtables-lock\") pod \"kube-proxy-47vvb\" (UID: \"d4b3bfa2-d076-4da9-8a4d-bbea46939a32\") " pod="kube-system/kube-proxy-47vvb"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987167    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4b3bfa2-d076-4da9-8a4d-bbea46939a32-kube-proxy\") pod \"kube-proxy-47vvb\" (UID: \"d4b3bfa2-d076-4da9-8a4d-bbea46939a32\") " pod="kube-system/kube-proxy-47vvb"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987212    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4b3bfa2-d076-4da9-8a4d-bbea46939a32-lib-modules\") pod \"kube-proxy-47vvb\" (UID: \"d4b3bfa2-d076-4da9-8a4d-bbea46939a32\") " pod="kube-system/kube-proxy-47vvb"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987301    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbqr7\" (UniqueName: \"kubernetes.io/projected/0350a8de-2ecf-4dff-8fd4-f2300a89af77-kube-api-access-jbqr7\") pod \"storage-provisioner\" (UID: \"0350a8de-2ecf-4dff-8fd4-f2300a89af77\") " pod="kube-system/storage-provisioner"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987432    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6n6g\" (UniqueName: \"kubernetes.io/projected/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-kube-api-access-d6n6g\") pod \"coredns-6d4b75cb6d-b4r6x\" (UID: \"5146f60d-fed8-4e0f-8cb0-850c7e0d58f1\") " pod="kube-system/coredns-6d4b75cb6d-b4r6x"
	Jan 27 11:29:21 test-preload-858946 kubelet[1121]: I0127 11:29:21.987641    1121 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 11:29:22 test-preload-858946 kubelet[1121]: E0127 11:29:22.089555    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 11:29:22 test-preload-858946 kubelet[1121]: E0127 11:29:22.089860    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume podName:5146f60d-fed8-4e0f-8cb0-850c7e0d58f1 nodeName:}" failed. No retries permitted until 2025-01-27 11:29:22.589736108 +0000 UTC m=+5.777800948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume") pod "coredns-6d4b75cb6d-b4r6x" (UID: "5146f60d-fed8-4e0f-8cb0-850c7e0d58f1") : object "kube-system"/"coredns" not registered
	Jan 27 11:29:22 test-preload-858946 kubelet[1121]: E0127 11:29:22.592477    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 11:29:22 test-preload-858946 kubelet[1121]: E0127 11:29:22.592556    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume podName:5146f60d-fed8-4e0f-8cb0-850c7e0d58f1 nodeName:}" failed. No retries permitted until 2025-01-27 11:29:23.592540726 +0000 UTC m=+6.780605555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume") pod "coredns-6d4b75cb6d-b4r6x" (UID: "5146f60d-fed8-4e0f-8cb0-850c7e0d58f1") : object "kube-system"/"coredns" not registered
	Jan 27 11:29:23 test-preload-858946 kubelet[1121]: E0127 11:29:23.601576    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 11:29:23 test-preload-858946 kubelet[1121]: E0127 11:29:23.601661    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume podName:5146f60d-fed8-4e0f-8cb0-850c7e0d58f1 nodeName:}" failed. No retries permitted until 2025-01-27 11:29:25.601647308 +0000 UTC m=+8.789712147 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume") pod "coredns-6d4b75cb6d-b4r6x" (UID: "5146f60d-fed8-4e0f-8cb0-850c7e0d58f1") : object "kube-system"/"coredns" not registered
	Jan 27 11:29:24 test-preload-858946 kubelet[1121]: E0127 11:29:24.024727    1121 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-b4r6x" podUID=5146f60d-fed8-4e0f-8cb0-850c7e0d58f1
	Jan 27 11:29:25 test-preload-858946 kubelet[1121]: E0127 11:29:25.616792    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 11:29:25 test-preload-858946 kubelet[1121]: E0127 11:29:25.617198    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume podName:5146f60d-fed8-4e0f-8cb0-850c7e0d58f1 nodeName:}" failed. No retries permitted until 2025-01-27 11:29:29.617170059 +0000 UTC m=+12.805234904 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5146f60d-fed8-4e0f-8cb0-850c7e0d58f1-config-volume") pod "coredns-6d4b75cb6d-b4r6x" (UID: "5146f60d-fed8-4e0f-8cb0-850c7e0d58f1") : object "kube-system"/"coredns" not registered
	Jan 27 11:29:26 test-preload-858946 kubelet[1121]: E0127 11:29:26.026018    1121 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-b4r6x" podUID=5146f60d-fed8-4e0f-8cb0-850c7e0d58f1
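
The MountVolume.SetUp failures above follow the kubelet's exponential backoff: durationBeforeRetry doubles on each attempt (500ms, 1s, 2s, 4s) until the coredns ConfigMap is registered. A minimal Go sketch of that doubling retry loop, where setUpVolume is a hypothetical stand-in for the real mount call:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // setUpVolume stands in for MountVolume.SetUp; it fails until the
    // referenced ConfigMap has been registered with the kubelet.
    func setUpVolume(configMapRegistered bool) error {
        if !configMapRegistered {
            return errors.New(`object "kube-system"/"coredns" not registered`)
        }
        return nil
    }

    func main() {
        delay := 500 * time.Millisecond // initial durationBeforeRetry from the log
        for attempt := 1; ; attempt++ {
            if err := setUpVolume(attempt >= 4); err != nil {
                fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
                time.Sleep(delay)
                delay *= 2 // 500ms -> 1s -> 2s -> 4s, matching the kubelet entries
                continue
            }
            fmt.Println("volume mounted")
            return
        }
    }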
	
	
	==> storage-provisioner [55e8e129b56d337cb055a4a0be245ad57d968a70efd31f213706d510586d054f] <==
	I0127 11:29:22.706081       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-858946 -n test-preload-858946
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-858946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-858946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-858946
--- FAIL: TestPreload (281.68s)

                                                
                                    
TestKubernetesUpgrade (403.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m33.735222486s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-480798] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-480798" primary control-plane node in "kubernetes-upgrade-480798" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:32:52.876129   60315 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:32:52.877427   60315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:32:52.877450   60315 out.go:358] Setting ErrFile to fd 2...
	I0127 11:32:52.877460   60315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:32:52.878102   60315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:32:52.878702   60315 out.go:352] Setting JSON to false
	I0127 11:32:52.879689   60315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8073,"bootTime":1737969500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:32:52.879781   60315 start.go:139] virtualization: kvm guest
	I0127 11:32:52.881888   60315 out.go:177] * [kubernetes-upgrade-480798] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:32:52.883249   60315 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:32:52.883317   60315 notify.go:220] Checking for updates...
	I0127 11:32:52.885929   60315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:32:52.887114   60315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:32:52.888333   60315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:32:52.889507   60315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:32:52.890659   60315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:32:52.892386   60315 config.go:182] Loaded profile config "pause-900843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:32:52.892572   60315 config.go:182] Loaded profile config "running-upgrade-968925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:32:52.892698   60315 config.go:182] Loaded profile config "stopped-upgrade-943115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:32:52.892821   60315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:32:52.937387   60315 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:32:52.938618   60315 start.go:297] selected driver: kvm2
	I0127 11:32:52.938632   60315 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:32:52.938647   60315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:32:52.939398   60315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:32:52.939515   60315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:32:52.956571   60315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:32:52.956638   60315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:32:52.956942   60315 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:32:52.956975   60315 cni.go:84] Creating CNI manager for ""
	I0127 11:32:52.957015   60315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:32:52.957028   60315 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:32:52.957134   60315 start.go:340] cluster config:
	{Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:32:52.957259   60315 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:32:52.959262   60315 out.go:177] * Starting "kubernetes-upgrade-480798" primary control-plane node in "kubernetes-upgrade-480798" cluster
	I0127 11:32:52.960583   60315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:32:52.960625   60315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:32:52.960637   60315 cache.go:56] Caching tarball of preloaded images
	I0127 11:32:52.960729   60315 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:32:52.960746   60315 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 11:32:52.960857   60315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/config.json ...
	I0127 11:32:52.960883   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/config.json: {Name:mk6d3885f3281a247e9d508f1fa412ba0071821d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:32:52.961038   60315 start.go:360] acquireMachinesLock for kubernetes-upgrade-480798: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:32:58.704405   60315 start.go:364] duration metric: took 5.743333439s to acquireMachinesLock for "kubernetes-upgrade-480798"
	I0127 11:32:58.704493   60315 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:32:58.704610   60315 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 11:32:58.706058   60315 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:32:58.706282   60315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:32:58.706331   60315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:32:58.725335   60315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0127 11:32:58.725769   60315 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:32:58.726361   60315 main.go:141] libmachine: Using API Version  1
	I0127 11:32:58.726385   60315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:32:58.726734   60315 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:32:58.726941   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetMachineName
	I0127 11:32:58.727095   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:32:58.727276   60315 start.go:159] libmachine.API.Create for "kubernetes-upgrade-480798" (driver="kvm2")
	I0127 11:32:58.727302   60315 client.go:168] LocalClient.Create starting
	I0127 11:32:58.727352   60315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem
	I0127 11:32:58.727395   60315 main.go:141] libmachine: Decoding PEM data...
	I0127 11:32:58.727419   60315 main.go:141] libmachine: Parsing certificate...
	I0127 11:32:58.727496   60315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem
	I0127 11:32:58.727526   60315 main.go:141] libmachine: Decoding PEM data...
	I0127 11:32:58.727540   60315 main.go:141] libmachine: Parsing certificate...
	I0127 11:32:58.727561   60315 main.go:141] libmachine: Running pre-create checks...
	I0127 11:32:58.727576   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .PreCreateCheck
	I0127 11:32:58.727952   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetConfigRaw
	I0127 11:32:58.728445   60315 main.go:141] libmachine: Creating machine...
	I0127 11:32:58.728462   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .Create
	I0127 11:32:58.728597   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) creating KVM machine...
	I0127 11:32:58.728619   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) creating network...
	I0127 11:32:58.731119   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found existing default KVM network
	I0127 11:32:58.734252   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.734038   60383 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 11:32:58.735389   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.735292   60383 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cc:34:91} reservation:<nil>}
	I0127 11:32:58.736489   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.736401   60383 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:f5:ae} reservation:<nil>}
	I0127 11:32:58.737736   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.737646   60383 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:6f:7b:4f} reservation:<nil>}
	I0127 11:32:58.739222   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.739142   60383 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003be610}
	I0127 11:32:58.739275   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | created network xml: 
	I0127 11:32:58.739296   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | <network>
	I0127 11:32:58.739309   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   <name>mk-kubernetes-upgrade-480798</name>
	I0127 11:32:58.739331   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   <dns enable='no'/>
	I0127 11:32:58.739368   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   
	I0127 11:32:58.739392   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0127 11:32:58.739405   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |     <dhcp>
	I0127 11:32:58.739413   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0127 11:32:58.739424   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |     </dhcp>
	I0127 11:32:58.739435   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   </ip>
	I0127 11:32:58.739442   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG |   
	I0127 11:32:58.739448   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | </network>
	I0127 11:32:58.739458   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | 
	I0127 11:32:58.744553   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | trying to create private KVM network mk-kubernetes-upgrade-480798 192.168.83.0/24...
	I0127 11:32:58.821005   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | private KVM network mk-kubernetes-upgrade-480798 192.168.83.0/24 created
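
Reassembled from the DBG lines above: after skipping the reserved 192.168.39.0/24 and the taken .50, .61, and .72 subnets, minikube defined the libvirt network below on the first free subnet, 192.168.83.0/24 (inspectable afterwards with virsh net-dumpxml mk-kubernetes-upgrade-480798):

    <network>
      <name>mk-kubernetes-upgrade-480798</name>
      <dns enable='no'/>

      <ip address='192.168.83.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.83.2' end='192.168.83.253'/>
        </dhcp>
      </ip>

    </network>
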
	I0127 11:32:58.821128   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting up store path in /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798 ...
	I0127 11:32:58.821179   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) building disk image from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:32:58.821196   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:58.821075   60383 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:32:58.821337   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Downloading /home/jenkins/minikube-integration/20319-18835/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:32:59.096131   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:59.095997   60383 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa...
	I0127 11:32:59.272282   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:59.272169   60383 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/kubernetes-upgrade-480798.rawdisk...
	I0127 11:32:59.272316   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | Writing magic tar header
	I0127 11:32:59.272329   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | Writing SSH key tar header
	I0127 11:32:59.272336   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:32:59.272297   60383 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798 ...
	I0127 11:32:59.272429   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798
	I0127 11:32:59.272463   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798 (perms=drwx------)
	I0127 11:32:59.272477   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines
	I0127 11:32:59.272499   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:32:59.272512   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835
	I0127 11:32:59.272530   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 11:32:59.272563   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines (perms=drwxr-xr-x)
	I0127 11:32:59.272584   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube (perms=drwxr-xr-x)
	I0127 11:32:59.272594   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home/jenkins
	I0127 11:32:59.272609   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | checking permissions on dir: /home
	I0127 11:32:59.272621   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | skipping /home - not owner
	I0127 11:32:59.272638   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins/minikube-integration/20319-18835 (perms=drwxrwxr-x)
	I0127 11:32:59.272649   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 11:32:59.272656   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 11:32:59.272671   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) creating domain...
	I0127 11:32:59.273820   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) define libvirt domain using xml: 
	I0127 11:32:59.273841   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) <domain type='kvm'>
	I0127 11:32:59.273856   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <name>kubernetes-upgrade-480798</name>
	I0127 11:32:59.273869   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <memory unit='MiB'>2200</memory>
	I0127 11:32:59.273882   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <vcpu>2</vcpu>
	I0127 11:32:59.273893   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <features>
	I0127 11:32:59.273913   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <acpi/>
	I0127 11:32:59.273930   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <apic/>
	I0127 11:32:59.273958   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <pae/>
	I0127 11:32:59.273991   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     
	I0127 11:32:59.274000   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   </features>
	I0127 11:32:59.274019   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <cpu mode='host-passthrough'>
	I0127 11:32:59.274030   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   
	I0127 11:32:59.274039   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   </cpu>
	I0127 11:32:59.274048   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <os>
	I0127 11:32:59.274060   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <type>hvm</type>
	I0127 11:32:59.274071   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <boot dev='cdrom'/>
	I0127 11:32:59.274082   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <boot dev='hd'/>
	I0127 11:32:59.274093   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <bootmenu enable='no'/>
	I0127 11:32:59.274107   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   </os>
	I0127 11:32:59.274123   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   <devices>
	I0127 11:32:59.274137   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <disk type='file' device='cdrom'>
	I0127 11:32:59.274155   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/boot2docker.iso'/>
	I0127 11:32:59.274167   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <target dev='hdc' bus='scsi'/>
	I0127 11:32:59.274179   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <readonly/>
	I0127 11:32:59.274191   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </disk>
	I0127 11:32:59.274201   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <disk type='file' device='disk'>
	I0127 11:32:59.274218   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 11:32:59.274235   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/kubernetes-upgrade-480798.rawdisk'/>
	I0127 11:32:59.274252   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <target dev='hda' bus='virtio'/>
	I0127 11:32:59.274269   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </disk>
	I0127 11:32:59.274280   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <interface type='network'>
	I0127 11:32:59.274292   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <source network='mk-kubernetes-upgrade-480798'/>
	I0127 11:32:59.274300   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <model type='virtio'/>
	I0127 11:32:59.274309   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </interface>
	I0127 11:32:59.274318   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <interface type='network'>
	I0127 11:32:59.274337   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <source network='default'/>
	I0127 11:32:59.274348   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <model type='virtio'/>
	I0127 11:32:59.274358   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </interface>
	I0127 11:32:59.274369   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <serial type='pty'>
	I0127 11:32:59.274379   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <target port='0'/>
	I0127 11:32:59.274406   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </serial>
	I0127 11:32:59.274424   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <console type='pty'>
	I0127 11:32:59.274439   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <target type='serial' port='0'/>
	I0127 11:32:59.274449   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </console>
	I0127 11:32:59.274457   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     <rng model='virtio'>
	I0127 11:32:59.274465   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)       <backend model='random'>/dev/random</backend>
	I0127 11:32:59.274477   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     </rng>
	I0127 11:32:59.274490   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     
	I0127 11:32:59.274507   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)     
	I0127 11:32:59.274522   60315 main.go:141] libmachine: (kubernetes-upgrade-480798)   </devices>
	I0127 11:32:59.274534   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) </domain>
	I0127 11:32:59.274539   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) 
	I0127 11:32:59.278459   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:5a:7b:37 in network default
	I0127 11:32:59.279039   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:32:59.279057   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) starting domain...
	I0127 11:32:59.279065   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) ensuring networks are active...
	I0127 11:32:59.279762   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Ensuring network default is active
	I0127 11:32:59.280071   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Ensuring network mk-kubernetes-upgrade-480798 is active
	I0127 11:32:59.280594   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) getting domain XML...
	I0127 11:32:59.281307   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) creating domain...
	I0127 11:33:00.581084   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) waiting for IP...
	I0127 11:33:00.581746   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:00.582133   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:00.582178   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:00.582120   60383 retry.go:31] will retry after 226.087443ms: waiting for domain to come up
	I0127 11:33:00.809610   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:00.810136   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:00.810166   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:00.810114   60383 retry.go:31] will retry after 243.977324ms: waiting for domain to come up
	I0127 11:33:01.055753   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:01.056205   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:01.056233   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:01.056184   60383 retry.go:31] will retry after 485.573092ms: waiting for domain to come up
	I0127 11:33:01.543562   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:01.544068   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:01.544093   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:01.544044   60383 retry.go:31] will retry after 498.610034ms: waiting for domain to come up
	I0127 11:33:02.044969   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:02.045518   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:02.045552   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:02.045484   60383 retry.go:31] will retry after 743.016058ms: waiting for domain to come up
	I0127 11:33:02.789885   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:02.790310   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:02.790356   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:02.790283   60383 retry.go:31] will retry after 607.241877ms: waiting for domain to come up
	I0127 11:33:03.398787   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:03.399326   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:03.399357   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:03.399308   60383 retry.go:31] will retry after 831.85584ms: waiting for domain to come up
	I0127 11:33:04.232507   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:04.233102   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:04.233141   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:04.233069   60383 retry.go:31] will retry after 1.190288919s: waiting for domain to come up
	I0127 11:33:05.424807   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:05.425403   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:05.425436   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:05.425376   60383 retry.go:31] will retry after 1.142156951s: waiting for domain to come up
	I0127 11:33:06.568772   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:06.569273   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:06.569296   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:06.569238   60383 retry.go:31] will retry after 2.171250872s: waiting for domain to come up
	I0127 11:33:08.742608   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:08.743227   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:08.743257   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:08.743195   60383 retry.go:31] will retry after 2.205385037s: waiting for domain to come up
	I0127 11:33:10.949839   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:10.950292   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:10.950346   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:10.950280   60383 retry.go:31] will retry after 2.611064284s: waiting for domain to come up
	I0127 11:33:13.564920   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:13.565327   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:13.565365   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:13.565297   60383 retry.go:31] will retry after 3.49811538s: waiting for domain to come up
	I0127 11:33:17.065853   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:17.066353   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find current IP address of domain kubernetes-upgrade-480798 in network mk-kubernetes-upgrade-480798
	I0127 11:33:17.066381   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | I0127 11:33:17.066304   60383 retry.go:31] will retry after 4.116949272s: waiting for domain to come up
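The retry.go:31 lines above show libmachine polling the libvirt DHCP leases with a growing, jittered delay until the new domain acquires an address. A minimal Go sketch of that pattern (illustrative only, not minikube's actual retry helper; the function name here is hypothetical):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling fn until it succeeds or maxWait elapses,
    // sleeping a jittered, roughly doubling backoff between attempts --
    // the same shape as the "will retry after ..." delays in the log.
    func retry(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        backoff := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            // add up to 50% jitter so concurrent waiters do not sync up
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            backoff *= 2
        }
    }

    func main() {
        attempts := 0
        err := retry(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("waiting for domain to come up")
            }
            return nil
        }, time.Minute)
        fmt.Println("done:", err)
    }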
	I0127 11:33:21.184895   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.185610   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) found domain IP: 192.168.83.73
	I0127 11:33:21.185634   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) reserving static IP address...
	I0127 11:33:21.185648   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has current primary IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.186068   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-480798", mac: "52:54:00:19:4c:2c", ip: "192.168.83.73"} in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.265278   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) reserved static IP address 192.168.83.73 for domain kubernetes-upgrade-480798
	I0127 11:33:21.265303   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) waiting for SSH...
	I0127 11:33:21.265314   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | Getting to WaitForSSH function...
	I0127 11:33:21.268458   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.268992   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.269024   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.269301   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | Using SSH client type: external
	I0127 11:33:21.269358   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa (-rw-------)
	I0127 11:33:21.269402   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:33:21.269422   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | About to run SSH command:
	I0127 11:33:21.269443   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | exit 0
	I0127 11:33:21.400121   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | SSH cmd err, output: <nil>: 
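The WaitForSSH step above shells out to the external ssh client with hardened options and runs `exit 0`; a zero exit status is the whole liveness check. A sketch of the same probe in Go (the host, user, and key path below are placeholders taken from the log, not a reusable API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs `exit 0` over ssh with the same hardening flags the
    // log shows libmachine passing; a nil error means sshd answered.
    func probeSSH(user, host, key string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        if err := probeSSH("docker", "192.168.83.73", "/path/to/id_rsa"); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }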
	I0127 11:33:21.400406   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) KVM machine creation complete
	I0127 11:33:21.400795   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetConfigRaw
	I0127 11:33:21.401470   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:21.401670   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:21.401871   60315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 11:33:21.401898   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetState
	I0127 11:33:21.403472   60315 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 11:33:21.403488   60315 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 11:33:21.403500   60315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 11:33:21.403506   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:21.406313   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.406794   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.406835   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.406990   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:21.407171   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.407366   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.407525   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:21.407728   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:21.407969   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:21.407981   60315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 11:33:21.523738   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:33:21.523778   60315 main.go:141] libmachine: Detecting the provisioner...
	I0127 11:33:21.523789   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:21.526967   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.527376   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.527421   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.527641   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:21.527832   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.527994   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.528135   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:21.528305   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:21.528482   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:21.528493   60315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 11:33:21.637079   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 11:33:21.637177   60315 main.go:141] libmachine: found compatible host: buildroot
	I0127 11:33:21.637188   60315 main.go:141] libmachine: Provisioning with buildroot...
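The provisioner is chosen by matching the ID field of the /etc/os-release output above ("buildroot"). A simplified parsing sketch, not minikube's actual detector:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner extracts the ID field from /etc/os-release
    // content; "buildroot" selects the buildroot provisioner.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
        fmt.Println(detectProvisioner(sample)) // buildroot
    }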
	I0127 11:33:21.637196   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetMachineName
	I0127 11:33:21.637474   60315 buildroot.go:166] provisioning hostname "kubernetes-upgrade-480798"
	I0127 11:33:21.637506   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetMachineName
	I0127 11:33:21.637710   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:21.640913   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.641342   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.641368   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.641590   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:21.641764   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.641903   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.642037   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:21.642213   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:21.642404   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:21.642417   60315 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-480798 && echo "kubernetes-upgrade-480798" | sudo tee /etc/hostname
	I0127 11:33:21.769092   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-480798
	
	I0127 11:33:21.769127   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:21.772148   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.772526   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.772555   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.772742   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:21.772912   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.773135   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:21.773300   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:21.773511   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:21.773688   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:21.773704   60315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-480798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-480798/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-480798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:33:21.887679   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
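The /etc/hosts snippet above is idempotent: grep -xq matches whole lines only, so the script either rewrites an existing 127.0.1.1 entry in place or appends one, and re-running it changes nothing. After provisioning, /etc/hosts carries a line like:

    127.0.1.1 kubernetes-upgrade-480798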
	I0127 11:33:21.887723   60315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:33:21.887739   60315 buildroot.go:174] setting up certificates
	I0127 11:33:21.887747   60315 provision.go:84] configureAuth start
	I0127 11:33:21.887761   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetMachineName
	I0127 11:33:21.888014   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:21.890723   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.891113   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.891146   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.891336   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:21.893810   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.894196   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:21.894224   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:21.894337   60315 provision.go:143] copyHostCerts
	I0127 11:33:21.894418   60315 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:33:21.894439   60315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:33:21.894511   60315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:33:21.894650   60315 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:33:21.894663   60315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:33:21.894707   60315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:33:21.894803   60315 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:33:21.894813   60315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:33:21.894845   60315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:33:21.894949   60315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-480798 san=[127.0.0.1 192.168.83.73 kubernetes-upgrade-480798 localhost minikube]
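The server cert generated above carries the SANs listed in the log line (loopback, the VM IP, the hostname, plus "localhost" and "minikube") so the same cert validates however the API endpoint is reached. A minimal crypto/x509 sketch of issuing such a cert; real provisioning signs with the CA key pair (ca.pem/ca-key.pem), whereas this sketch self-signs to stay short:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-480798"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config below
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-480798", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.73")},
        }
        // self-signed here; minikube uses the CA cert/key as the parent
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }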
	I0127 11:33:22.108935   60315 provision.go:177] copyRemoteCerts
	I0127 11:33:22.109007   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:33:22.109051   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.111748   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.112057   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.112088   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.112291   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.112528   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.112693   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.112825   60315 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa Username:docker}
	I0127 11:33:22.200973   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:33:22.226777   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 11:33:22.249468   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:33:22.272219   60315 provision.go:87] duration metric: took 384.459449ms to configureAuth
	I0127 11:33:22.272247   60315 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:33:22.272447   60315 config.go:182] Loaded profile config "kubernetes-upgrade-480798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:33:22.272537   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.275231   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.275639   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.275675   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.275845   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.276031   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.276172   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.276308   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.276475   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:22.276682   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:22.276706   60315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:33:22.504723   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
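The drop-in echoed back above lands in /etc/sysconfig/crio.minikube; 10.96.0.0/12 matches the ServiceCIDR in the profile config further down, so CRI-O treats registries exposed on in-cluster service IPs (such as the registry addon) as insecure rather than demanding TLS.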
	
	I0127 11:33:22.504751   60315 main.go:141] libmachine: Checking connection to Docker...
	I0127 11:33:22.504762   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetURL
	I0127 11:33:22.506265   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | using libvirt version 6000000
	I0127 11:33:22.508462   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.508813   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.508851   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.508952   60315 main.go:141] libmachine: Docker is up and running!
	I0127 11:33:22.508966   60315 main.go:141] libmachine: Reticulating splines...
	I0127 11:33:22.508973   60315 client.go:171] duration metric: took 23.781663574s to LocalClient.Create
	I0127 11:33:22.509000   60315 start.go:167] duration metric: took 23.781726566s to libmachine.API.Create "kubernetes-upgrade-480798"
	I0127 11:33:22.509014   60315 start.go:293] postStartSetup for "kubernetes-upgrade-480798" (driver="kvm2")
	I0127 11:33:22.509028   60315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:22.509052   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:22.509281   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:22.509304   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.511358   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.511745   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.511783   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.511888   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.512071   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.512234   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.512372   60315 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa Username:docker}
	I0127 11:33:22.593881   60315 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:22.598636   60315 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:33:22.598662   60315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:22.598734   60315 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:22.598817   60315 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:22.598899   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:22.607867   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:22.632358   60315 start.go:296] duration metric: took 123.331179ms for postStartSetup
	I0127 11:33:22.632409   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetConfigRaw
	I0127 11:33:22.633054   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:22.635899   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.636305   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.636338   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.636572   60315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/config.json ...
	I0127 11:33:22.636737   60315 start.go:128] duration metric: took 23.932116357s to createHost
	I0127 11:33:22.636759   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.638806   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.639122   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.639146   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.639311   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.639515   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.639680   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.639815   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.639990   60315 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:22.640181   60315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0127 11:33:22.640192   60315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:22.743753   60315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977602.687891122
	
	I0127 11:33:22.743782   60315 fix.go:216] guest clock: 1737977602.687891122
	I0127 11:33:22.743789   60315 fix.go:229] Guest: 2025-01-27 11:33:22.687891122 +0000 UTC Remote: 2025-01-27 11:33:22.636747623 +0000 UTC m=+29.798514211 (delta=51.143499ms)
	I0127 11:33:22.743819   60315 fix.go:200] guest clock delta is within tolerance: 51.143499ms
	I0127 11:33:22.743826   60315 start.go:83] releasing machines lock for "kubernetes-upgrade-480798", held for 24.039369618s
	I0127 11:33:22.743855   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:22.744108   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:22.747017   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.747432   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.747465   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.747654   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:22.748150   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:22.748314   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .DriverName
	I0127 11:33:22.748400   60315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:22.748442   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.748493   60315 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:22.748520   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHHostname
	I0127 11:33:22.751346   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.751427   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.751748   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.751787   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:22.751810   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.751835   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:22.752032   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.752116   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHPort
	I0127 11:33:22.752194   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.752267   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHKeyPath
	I0127 11:33:22.752322   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.752390   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetSSHUsername
	I0127 11:33:22.752456   60315 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa Username:docker}
	I0127 11:33:22.752503   60315 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kubernetes-upgrade-480798/id_rsa Username:docker}
	I0127 11:33:22.858090   60315 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:22.864039   60315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:23.026680   60315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:23.033246   60315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:23.033317   60315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:23.052267   60315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:33:23.052297   60315 start.go:495] detecting cgroup driver to use...
	I0127 11:33:23.052381   60315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:23.074714   60315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:23.094637   60315 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:23.094694   60315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:23.107597   60315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:23.120523   60315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:23.246998   60315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:23.415148   60315 docker.go:233] disabling docker service ...
	I0127 11:33:23.415217   60315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:23.431452   60315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:23.444445   60315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:23.582380   60315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:23.708208   60315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:23.723641   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:23.744590   60315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:33:23.744648   60315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:23.756620   60315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:23.756682   60315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:23.766948   60315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:23.781792   60315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
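After the three sed edits above, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf would read roughly as follows (a sketch; the rest of the TOML drop-in is untouched). The old conmon_cgroup line is deleted and re-added directly under cgroup_manager because CRI-O expects conmon_cgroup to be "pod" when the cgroupfs manager is in use:

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"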
	I0127 11:33:23.793544   60315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:23.807081   60315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:23.822265   60315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:33:23.822343   60315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:33:23.839474   60315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
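The sysctl probe fails with status 255 simply because br_netfilter is not loaded yet, so /proc/sys/net/bridge does not exist; the log notes this "might be okay", loads the module, and enables IPv4 forwarding. A minimal Go sketch of the same fallback (assumes root; error handling abbreviated):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter mirrors the fallback in the log: if the bridge
    // sysctl is missing, load br_netfilter, then enable forwarding.
    func ensureNetfilter() error {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
            if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
                return err.Run()
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
        }
    }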
	I0127 11:33:23.850429   60315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:23.967876   60315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:24.065449   60315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:24.065529   60315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:24.070032   60315 start.go:563] Will wait 60s for crictl version
	I0127 11:33:24.070088   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:24.073472   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:24.108008   60315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:33:24.108090   60315 ssh_runner.go:195] Run: crio --version
	I0127 11:33:24.138157   60315 ssh_runner.go:195] Run: crio --version
	I0127 11:33:24.165422   60315 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:33:24.166712   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:24.169394   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169753   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:24.169776   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169978   60315 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:24.173899   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
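The grep on the previous line checks whether host.minikube.internal is already mapped; when it is not, the brace group filters out any stale entry, appends the gateway mapping, and copies the temp file back with sudo (a plain > redirect onto /etc/hosts would run with the unprivileged user's rights and fail). The resulting entry:

    192.168.83.1	host.minikube.internal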
	I0127 11:33:24.185980   60315 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:24.186105   60315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:33:24.186163   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:24.217311   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:24.217389   60315 ssh_runner.go:195] Run: which lz4
	I0127 11:33:24.221400   60315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:33:24.225509   60315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:33:24.225538   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:33:25.723293   60315 crio.go:462] duration metric: took 1.501912534s to copy over tarball
	I0127 11:33:25.723373   60315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:33:28.192368   60315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.468964657s)
	I0127 11:33:28.192394   60315 crio.go:469] duration metric: took 2.469070397s to extract the tarball
	I0127 11:33:28.192404   60315 ssh_runner.go:146] rm: /preloaded.tar.lz4
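The ~473 MB preload tarball is copied over SSH, unpacked into /var with file capabilities preserved (the security.capability xattrs matter for binaries that rely on capabilities rather than setuid), then deleted; the "duration metric" lines are simple time.Since bookkeeping. A sketch of the extraction step with the same flags (illustrative; minikube runs this through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // -I lz4 decompresses through lz4; the xattr flags keep
        // security.capability attributes on the extracted files.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }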
	I0127 11:33:28.233159   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:28.276108   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:28.276139   60315 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:33:28.276238   60315 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.276244   60315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.276271   60315 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.276275   60315 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:33:28.276286   60315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.276247   60315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.276298   60315 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.276254   60315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.277925   60315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.277903   60315 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.277907   60315 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:33:28.277927   60315 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
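Every required image misses the local Docker daemon ("No such image" above is expected on a CI host that never pulled these tags), so the loader falls back to the podman image inspect calls that follow and flags anything the VM's runtime does not hold at the expected digest for transfer. A simplified, illustrative model of that decision (the maps stand in for the daemon/runtime lookups; hashes taken from the log):

    package main

    import "fmt"

    func main() {
        runtimeImages := map[string]string{} // fresh VM: runtime holds nothing yet
        want := map[string]string{
            "registry.k8s.io/etcd:3.4.13-0": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
            "registry.k8s.io/pause:3.2":     "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        }
        for img, hash := range want {
            if runtimeImages[img] != hash {
                fmt.Printf("%q needs transfer: does not exist at hash %q in container runtime\n", img, hash)
            }
        }
    }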
	I0127 11:33:28.427872   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.428061   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.434839   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.440570   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.457387   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.459239   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.500292   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:33:28.519394   60315 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:33:28.519450   60315 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.519466   60315 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:33:28.519501   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.519501   60315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.519631   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.524827   60315 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:33:28.524864   60315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.524907   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.572604   60315 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:33:28.572660   60315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.572701   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594543   60315 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:33:28.594591   60315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.594640   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594676   60315 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:33:28.594711   60315 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.594744   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.604978   60315 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:33:28.605007   60315 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:33:28.605028   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.605042   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.605103   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.605161   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.605178   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.605235   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.605280   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725558   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.725597   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725601   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.725707   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.725760   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.725793   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.725820   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.841925   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.862363   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.869074   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.869108   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.869120   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.869200   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.869288   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.933917   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:33:28.985479   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:33:28.998015   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:33:29.008764   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:33:29.008846   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:29.012783   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:33:29.012861   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:33:29.047571   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:33:29.222794   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:29.366916   60315 cache_images.go:92] duration metric: took 1.090751434s to LoadCachedImages
	W0127 11:33:29.367016   60315 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
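The block above is minikube's image-cache reconciliation: each required image is looked up in the container runtime by ID (the podman inspect / crictl calls), stale tags are removed with crictl rmi, and replacements are loaded from the local cache under .minikube/cache/images. Here the coredns_1.7.0 tarball is missing from that cache, so the load is abandoned with the X warning and kubeadm's preflight pull later fetches the images instead. A minimal way to repeat the runtime-side check by hand, assuming crictl on the node is pointed at the CRI-O socket:

	# Show the image ID the runtime currently holds for a tag:
	sudo crictl images --digests registry.k8s.io/coredns:1.7.0
	# Drop a stale tag so a fresh copy can be loaded or pulled:
	sudo crictl rmi registry.k8s.io/coredns:1.7.0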
	I0127 11:33:29.367036   60315 kubeadm.go:934] updating node { 192.168.83.73 8443 v1.20.0 crio true true} ...
	I0127 11:33:29.367182   60315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-480798 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
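A note on the kubelet unit rendered above: the empty ExecStart= line is deliberate systemd syntax. ExecStart directives in drop-ins append to the base unit's list, so assigning the empty string first resets the list, letting the v1.20.0 command line replace the packaged one rather than duplicate it. The merged result can be checked with standard systemd tooling (nothing minikube-specific):

	# Print the unit together with all drop-ins, then the effective command line:
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart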
	I0127 11:33:29.367285   60315 ssh_runner.go:195] Run: crio config
	I0127 11:33:29.430210   60315 cni.go:84] Creating CNI manager for ""
	I0127 11:33:29.430230   60315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:29.430239   60315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:33:29.430257   60315 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.73 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-480798 NodeName:kubernetes-upgrade-480798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:33:29.430387   60315 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-480798"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
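The manifest above is four YAML documents: kubeadm's InitConfiguration and ClusterConfiguration (apiVersion kubeadm.k8s.io/v1beta2, the API generation matching Kubernetes v1.20.0) plus a KubeletConfiguration and a KubeProxyConfiguration; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a sketch, kubeadm's dry-run mode can sanity-check such a file without mutating the node (binary path taken from this log):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run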
	
	I0127 11:33:29.430463   60315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:33:29.440428   60315 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:29.440483   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:29.450433   60315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 11:33:29.466059   60315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:29.480733   60315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:33:29.497078   60315 ssh_runner.go:195] Run: grep 192.168.83.73	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:29.500859   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
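The grep on the preceding line checks whether /etc/hosts already maps control-plane.minikube.internal; since it does not, the one-liner above rebuilds the file: grep -v drops any stale entry for that hostname, echo appends the fresh mapping, and the result is staged in /tmp/h.$$ ($$ expands to the shell's PID) so the final sudo cp swaps /etc/hosts in a single step. Afterwards the node resolves the control-plane endpoint locally:

	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.83.73	control-plane.minikube.internal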
	I0127 11:33:29.514576   60315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:29.643945   60315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:29.663067   60315 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798 for IP: 192.168.83.73
	I0127 11:33:29.663088   60315 certs.go:194] generating shared ca certs ...
	I0127 11:33:29.663106   60315 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.663261   60315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:29.663315   60315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:29.663336   60315 certs.go:256] generating profile certs ...
	I0127 11:33:29.663446   60315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key
	I0127 11:33:29.663471   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt with IP's: []
	I0127 11:33:29.800004   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt ...
	I0127 11:33:29.800038   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt: {Name:mkaa6ca211b0e39160992b60e71795f794b4fa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800243   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key ...
	I0127 11:33:29.800267   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key: {Name:mkba3526bbc1c913be01a6bc4ce4e3baf78ed28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800412   60315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c
	I0127 11:33:29.800436   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.73]
	I0127 11:33:29.963202   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c ...
	I0127 11:33:29.963227   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c: {Name:mk647f7a7f5a0dabbc21fe291d29db85829b422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963364   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c ...
	I0127 11:33:29.963378   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c: {Name:mkbbe66814ffa44807139b1c6c8df1cbfe9d85f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963443   60315 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt
	I0127 11:33:29.963520   60315 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key
	I0127 11:33:29.963577   60315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key
	I0127 11:33:29.963591   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt with IP's: []
	I0127 11:33:30.061333   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt ...
	I0127 11:33:30.061361   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt: {Name:mk13a4dcb74d04f521c59b139c0faacce5465377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:30.061519   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key ...
	I0127 11:33:30.061536   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key: {Name:mk309b6c9e6da261ab0aecbaa4e7871ee8cdd22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:30.061732   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:30.061781   60315 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:30.061794   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:30.061833   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:30.061869   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:30.061901   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:30.061956   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:30.062539   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:30.094530   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:30.121255   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:30.147323   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:30.175961   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 11:33:30.203496   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:33:30.228391   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:30.256448   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:30.282667   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:30.305716   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:30.334009   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:30.359897   60315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:30.379030   60315 ssh_runner.go:195] Run: openssl version
	I0127 11:33:30.386661   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:30.399285   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404091   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404156   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.411200   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:30.424652   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:30.439225   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444544   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444608   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.451131   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:30.465772   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:30.476581   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481472   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481535   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.487353   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
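The test/ln sequence above wires minikube's CAs into the system trust store: OpenSSL locates CAs under /etc/ssl/certs by subject-hash filenames, so each PEM gets a <hash>.0 symlink named after its openssl x509 -hash output (b5213941 for minikubeCA.pem, as the commands above show). The mapping is easy to verify by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink /etc/ssl/certs/b5213941.0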
	I0127 11:33:30.502830   60315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:30.508448   60315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
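stat exiting with status 1 here is the signal minikube uses to conclude that no control plane ever ran on this node, so StartCluster proceeds as a first start instead of reusing state. The profile certificates copied in above can be inspected with plain openssl if needed (a sketch, not part of the test flow):

	sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'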
	I0127 11:33:30.508513   60315 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:30.508611   60315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:30.508664   60315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:30.563952   60315 cri.go:89] found id: ""
	I0127 11:33:30.564015   60315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:33:30.580363   60315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:33:30.601987   60315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:33:30.620766   60315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:33:30.620788   60315 kubeadm.go:157] found existing configuration files:
	
	I0127 11:33:30.620841   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:33:30.634563   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:33:30.634639   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:33:30.645365   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:33:30.657896   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:33:30.657960   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:33:30.669588   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.679304   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:33:30.679367   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.688972   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:33:30.697895   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:33:30.697950   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:33:30.708950   60315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
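The long --ignore-preflight-errors list mirrors what minikube knows about its own VM: the manifest and data directories already exist from provisioning, port 10250 may still be held by the kubelet started earlier in this log, and the swap/CPU/memory checks are blanket-skipped for VM setups. The Port-10250 case, for instance, can be confirmed directly on the node:

	# A previously started kubelet may already be listening here:
	sudo ss -ltnp | grep 10250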
	I0127 11:33:30.840143   60315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:33:30.840245   60315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:33:30.968066   60315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:33:30.968191   60315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:33:30.968338   60315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:33:31.140896   60315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:33:31.293692   60315 out.go:235]   - Generating certificates and keys ...
	I0127 11:33:31.293813   60315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:33:31.293920   60315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:33:31.294024   60315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:33:31.694694   60315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:33:31.821080   60315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:33:32.143166   60315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:33:32.197137   60315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:33:32.197479   60315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.425895   60315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:33:32.426224   60315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.589528   60315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:33:32.778137   60315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:33:33.160573   60315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:33:33.160670   60315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:33:33.224218   60315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:33:33.788353   60315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:33:33.899841   60315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:33:33.976565   60315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:33:33.993549   60315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:33:33.994045   60315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:33:33.994107   60315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:33:34.115038   60315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:33:34.192855   60315 out.go:235]   - Booting up control plane ...
	I0127 11:33:34.193031   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:33:34.193145   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:33:34.193251   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:33:34.193399   60315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:33:34.193617   60315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:34:14.086807   60315 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:34:14.087113   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:34:14.087318   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:34:19.087268   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:34:19.087464   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:34:29.086931   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:34:29.087166   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:34:49.087079   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:34:49.087414   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:35:29.088250   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:35:29.088494   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:35:29.088512   60315 kubeadm.go:310] 
	I0127 11:35:29.088553   60315 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:35:29.088620   60315 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:35:29.088631   60315 kubeadm.go:310] 
	I0127 11:35:29.088688   60315 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:35:29.088726   60315 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:35:29.088828   60315 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:35:29.088838   60315 kubeadm.go:310] 
	I0127 11:35:29.088997   60315 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:35:29.089056   60315 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:35:29.089115   60315 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:35:29.089128   60315 kubeadm.go:310] 
	I0127 11:35:29.089266   60315 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:35:29.089341   60315 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:35:29.089348   60315 kubeadm.go:310] 
	I0127 11:35:29.089496   60315 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:35:29.089629   60315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:35:29.089720   60315 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:35:29.089825   60315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:35:29.089868   60315 kubeadm.go:310] 
	I0127 11:35:29.090041   60315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:35:29.090115   60315 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:35:29.090193   60315 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 11:35:29.090298   60315 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
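Reading the failure: connection refused on 127.0.0.1:10248 means the kubelet's healthz endpoint never came up at all, i.e. the kubelet process is not starting (rather than a control plane that is merely slow), so the journal is the place to look. Following the log's own advice on the node:

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause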
	
	I0127 11:35:29.090336   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:35:29.532310   60315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:35:29.545283   60315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:35:29.555109   60315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:35:29.555127   60315 kubeadm.go:157] found existing configuration files:
	
	I0127 11:35:29.555177   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:35:29.563261   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:35:29.563316   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:35:29.571495   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:35:29.579279   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:35:29.579324   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:35:29.587492   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:35:29.595435   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:35:29.595475   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:35:29.603506   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:35:29.611160   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:35:29.611207   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:35:29.619358   60315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:35:29.683796   60315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:35:29.683905   60315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:35:29.829619   60315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:35:29.829877   60315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:35:29.830094   60315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:35:30.007795   60315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:35:30.009560   60315 out.go:235]   - Generating certificates and keys ...
	I0127 11:35:30.009674   60315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:35:30.009739   60315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:35:30.009821   60315 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:35:30.009889   60315 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:35:30.010005   60315 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:35:30.010113   60315 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:35:30.010202   60315 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:35:30.010512   60315 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:35:30.010887   60315 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:35:30.011280   60315 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:35:30.011339   60315 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:35:30.011389   60315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:35:30.196584   60315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:35:30.326630   60315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:35:30.448289   60315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:35:30.681486   60315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:35:30.701008   60315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:35:30.702870   60315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:35:30.702941   60315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:35:30.839873   60315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:35:30.842619   60315 out.go:235]   - Booting up control plane ...
	I0127 11:35:30.842756   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:35:30.849463   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:35:30.850816   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:35:30.851806   60315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:35:30.854764   60315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:36:10.855298   60315 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:36:10.855873   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:36:10.856141   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:36:15.856343   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:36:15.856625   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:36:25.856800   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:36:25.857066   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:36:45.857686   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:36:45.857907   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:37:25.859527   60315 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:37:25.859809   60315 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:37:25.859828   60315 kubeadm.go:310] 
	I0127 11:37:25.859868   60315 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:37:25.859926   60315 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:37:25.859938   60315 kubeadm.go:310] 
	I0127 11:37:25.859993   60315 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:37:25.860037   60315 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:37:25.860219   60315 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:37:25.860242   60315 kubeadm.go:310] 
	I0127 11:37:25.860387   60315 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:37:25.860437   60315 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:37:25.860468   60315 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:37:25.860474   60315 kubeadm.go:310] 
	I0127 11:37:25.860625   60315 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:37:25.860735   60315 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:37:25.860747   60315 kubeadm.go:310] 
	I0127 11:37:25.860896   60315 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:37:25.861016   60315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:37:25.861141   60315 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:37:25.861254   60315 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:37:25.861264   60315 kubeadm.go:310] 
	I0127 11:37:25.862089   60315 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:37:25.862224   60315 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:37:25.862313   60315 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:37:25.862390   60315 kubeadm.go:394] duration metric: took 3m55.353881651s to StartCluster
	I0127 11:37:25.862505   60315 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:37:25.862648   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:37:25.907751   60315 cri.go:89] found id: ""
	I0127 11:37:25.907780   60315 logs.go:282] 0 containers: []
	W0127 11:37:25.907792   60315 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:37:25.907799   60315 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:37:25.907867   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:37:25.952162   60315 cri.go:89] found id: ""
	I0127 11:37:25.952188   60315 logs.go:282] 0 containers: []
	W0127 11:37:25.952196   60315 logs.go:284] No container was found matching "etcd"
	I0127 11:37:25.952212   60315 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:37:25.952268   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:37:25.995244   60315 cri.go:89] found id: ""
	I0127 11:37:25.995287   60315 logs.go:282] 0 containers: []
	W0127 11:37:25.995298   60315 logs.go:284] No container was found matching "coredns"
	I0127 11:37:25.995306   60315 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:37:25.995386   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:37:26.037644   60315 cri.go:89] found id: ""
	I0127 11:37:26.037677   60315 logs.go:282] 0 containers: []
	W0127 11:37:26.037689   60315 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:37:26.037696   60315 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:37:26.037770   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:37:26.075839   60315 cri.go:89] found id: ""
	I0127 11:37:26.075877   60315 logs.go:282] 0 containers: []
	W0127 11:37:26.075890   60315 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:37:26.075900   60315 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:37:26.075969   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:37:26.113546   60315 cri.go:89] found id: ""
	I0127 11:37:26.113576   60315 logs.go:282] 0 containers: []
	W0127 11:37:26.113583   60315 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:37:26.113590   60315 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:37:26.113648   60315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:37:26.145553   60315 cri.go:89] found id: ""
	I0127 11:37:26.145583   60315 logs.go:282] 0 containers: []
	W0127 11:37:26.145597   60315 logs.go:284] No container was found matching "kindnet"
	I0127 11:37:26.145606   60315 logs.go:123] Gathering logs for kubelet ...
	I0127 11:37:26.145619   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:37:26.214078   60315 logs.go:123] Gathering logs for dmesg ...
	I0127 11:37:26.214118   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:37:26.230203   60315 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:37:26.230238   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:37:26.395760   60315 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:37:26.395845   60315 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:37:26.395871   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:37:26.513959   60315 logs.go:123] Gathering logs for container status ...
	I0127 11:37:26.513994   60315 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 11:37:26.555812   60315 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:37:26.555878   60315 out.go:270] * 
	W0127 11:37:26.555946   60315 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0127 11:37:26.555966   60315 out.go:270] * 
	W0127 11:37:26.557143   60315 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:37:26.560646   60315 out.go:201] 
	W0127 11:37:26.561894   60315 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0127 11:37:26.561961   60315 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:37:26.561993   60315 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:37:26.563482   60315 out.go:201] 

                                                
                                                
** /stderr **
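The kubeadm output above already names the relevant checks; collected here as a minimal diagnostic sketch (run on the node, e.g. via ssh with this profile; CONTAINERID is a placeholder for an ID taken from the crictl listing):

	# inspect the kubelet and control-plane containers on the failing node
	out/minikube-linux-amd64 -p kubernetes-upgrade-480798 ssh
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# if the kubelet journal shows a cgroup-driver mismatch, the suggested retry is:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --extra-config=kubelet.cgroup-driver=systemd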
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-480798
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-480798: (1.836417587s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-480798 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-480798 status --format={{.Host}}: exit status 7 (77.544874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
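Exit status 7 is expected after a stop: per minikube's status help text, the exit code encodes host, cluster, and Kubernetes state as bit flags (1 + 2 + 4 = 7 when all three are down). A minimal bash sketch for decoding it, reusing the profile from this run:

	out/minikube-linux-amd64 -p kubernetes-upgrade-480798 status --format={{.Host}}
	code=$?
	(( code & 1 )) && echo "host not running"
	(( code & 2 )) && echo "cluster not running"
	(( code & 4 )) && echo "kubernetes not running"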
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 11:37:34.555851   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.277318027s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-480798 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.970742ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-480798] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-480798
	    minikube start -p kubernetes-upgrade-480798 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4807982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-480798 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
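The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) shows the downgrade guard fired as intended; before picking one of the suggested paths, the version actually running can be confirmed the same way the test does above:

	kubectl --context kubernetes-upgrade-480798 version --output=json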
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480798 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.771497722s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 11:39:32.728508247 +0000 UTC m=+4051.793802308
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-480798 -n kubernetes-upgrade-480798
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-480798 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-480798 logs -n 25: (1.587578786s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-673007                      | cilium-673007             | jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	| start   | -p cert-expiration-091274             | cert-expiration-091274    | jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-943115             | stopped-upgrade-943115    | jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	| start   | -p force-systemd-flag-723290          | force-systemd-flag-723290 | jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:36 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:36 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	| start   | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-723290 ssh cat     | force-systemd-flag-723290 | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-723290          | force-systemd-flag-723290 | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	| start   | -p cert-options-901069                | cert-options-901069       | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-200407 sudo           | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:37 UTC |
	| start   | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-480798          | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	| start   | -p kubernetes-upgrade-480798          | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:38 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-901069 ssh               | cert-options-901069       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-901069 -- sudo        | cert-options-901069       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-901069                | cert-options-901069       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	| start   | -p old-k8s-version-570778             | old-k8s-version-570778    | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-200407 sudo           | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-200407                | NoKubernetes-200407       | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	| start   | -p no-preload-273200                  | no-preload-273200         | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798          | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798          | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-091274             | cert-expiration-091274    | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:39:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:39:12.093415   67633 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:39:12.093532   67633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:39:12.093536   67633 out.go:358] Setting ErrFile to fd 2...
	I0127 11:39:12.093540   67633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:39:12.093747   67633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:39:12.094267   67633 out.go:352] Setting JSON to false
	I0127 11:39:12.095255   67633 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8452,"bootTime":1737969500,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:39:12.095346   67633 start.go:139] virtualization: kvm guest
	I0127 11:39:12.097949   67633 out.go:177] * [cert-expiration-091274] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:39:12.099427   67633 notify.go:220] Checking for updates...
	I0127 11:39:12.099444   67633 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:39:12.100787   67633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:39:12.101984   67633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:39:12.103226   67633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:39:12.104607   67633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:39:12.105751   67633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:39:12.107197   67633 config.go:182] Loaded profile config "cert-expiration-091274": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:39:12.107714   67633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:12.107755   67633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:12.124891   67633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0127 11:39:12.125338   67633 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:12.125937   67633 main.go:141] libmachine: Using API Version  1
	I0127 11:39:12.125954   67633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:12.126293   67633 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:12.126501   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:12.126781   67633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:39:12.127284   67633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:12.127324   67633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:12.148838   67633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0127 11:39:12.149332   67633 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:12.149902   67633 main.go:141] libmachine: Using API Version  1
	I0127 11:39:12.149922   67633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:12.150488   67633 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:12.150724   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:12.192729   67633 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:39:12.194131   67633 start.go:297] selected driver: kvm2
	I0127 11:39:12.194140   67633 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-091274 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-091274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:39:12.194249   67633 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:39:12.194933   67633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:39:12.194999   67633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:39:12.210076   67633 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:39:12.210494   67633 cni.go:84] Creating CNI manager for ""
	I0127 11:39:12.210529   67633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:39:12.210570   67633 start.go:340] cluster config:
	{Name:cert-expiration-091274 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-091274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:39:12.210654   67633 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:39:12.212290   67633 out.go:177] * Starting "cert-expiration-091274" primary control-plane node in "cert-expiration-091274" cluster
	I0127 11:39:12.213479   67633 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:39:12.213501   67633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:39:12.213506   67633 cache.go:56] Caching tarball of preloaded images
	I0127 11:39:12.213593   67633 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:39:12.213600   67633 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:39:12.213672   67633 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/config.json ...
	I0127 11:39:12.213838   67633 start.go:360] acquireMachinesLock for cert-expiration-091274: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:39:12.213869   67633 start.go:364] duration metric: took 21.318µs to acquireMachinesLock for "cert-expiration-091274"
	I0127 11:39:12.213878   67633 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:39:12.213896   67633 fix.go:54] fixHost starting: 
	I0127 11:39:12.214162   67633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:12.214186   67633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:12.228455   67633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0127 11:39:12.228811   67633 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:12.229208   67633 main.go:141] libmachine: Using API Version  1
	I0127 11:39:12.229223   67633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:12.229483   67633 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:12.229660   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:12.229765   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetState
	I0127 11:39:12.231286   67633 fix.go:112] recreateIfNeeded on cert-expiration-091274: state=Running err=<nil>
	W0127 11:39:12.231298   67633 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:39:12.232999   67633 out.go:177] * Updating the running kvm2 "cert-expiration-091274" VM ...
	I0127 11:39:09.746626   66811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:39:09.755688   66811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 11:39:09.771214   66811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:39:09.786644   66811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0127 11:39:09.803710   66811 ssh_runner.go:195] Run: grep 192.168.61.181	control-plane.minikube.internal$ /etc/hosts
	I0127 11:39:09.810241   66811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:39:09.824331   66811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:39:09.942265   66811 ssh_runner.go:195] Run: sudo systemctl start kubelet
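The /etc/hosts rewrite a few lines above is an idempotent filter-and-append: every existing line for control-plane.minikube.internal is stripped out and a fresh "IP<TAB>host" mapping is appended, so repeated starts converge on a single entry. A minimal Go sketch of the same idea (hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends a fresh
	// "ip\thost" mapping, mirroring the filter-and-append rewrite in the log.
	func ensureHostsEntry(hosts, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		// Stale entry gets replaced; re-running produces identical output.
		before := "127.0.0.1\tlocalhost\n192.168.61.5\tcontrol-plane.minikube.internal\n"
		fmt.Print(ensureHostsEntry(before, "192.168.61.181", "control-plane.minikube.internal"))
	}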
	I0127 11:39:09.961004   66811 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200 for IP: 192.168.61.181
	I0127 11:39:09.961023   66811 certs.go:194] generating shared ca certs ...
	I0127 11:39:09.961038   66811 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:09.961184   66811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:39:09.961222   66811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:39:09.961232   66811 certs.go:256] generating profile certs ...
	I0127 11:39:09.961282   66811 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.key
	I0127 11:39:09.961295   66811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt with IP's: []
	I0127 11:39:10.044459   66811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt ...
	I0127 11:39:10.044487   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: {Name:mk3155ac2c0ae33cc866106bb78c71c3af6384d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:10.044656   66811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.key ...
	I0127 11:39:10.044667   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.key: {Name:mkf871c5d8bfb95ba19361c4c77ffa626dd91d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:10.044747   66811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key.47cca791
	I0127 11:39:10.044761   66811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt.47cca791 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.181]
	I0127 11:39:10.302740   66811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt.47cca791 ...
	I0127 11:39:10.302771   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt.47cca791: {Name:mkcee9f4567a638ed09f69db74b08f6620715661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:10.302929   66811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key.47cca791 ...
	I0127 11:39:10.302943   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key.47cca791: {Name:mkad9511e1977f014e3db9d5de7158cea925174b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:10.303009   66811 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt.47cca791 -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt
	I0127 11:39:10.303085   66811 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key.47cca791 -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key
	I0127 11:39:10.303136   66811 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key
	I0127 11:39:10.303151   66811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.crt with IP's: []
	I0127 11:39:10.503180   66811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.crt ...
	I0127 11:39:10.503208   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.crt: {Name:mk3faf4f79c73604bf0cbdb3b93744c8fb3ce0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:10.503374   66811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key ...
	I0127 11:39:10.503389   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key: {Name:mk1534802ebdae98293352192e56e0d95eef4442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
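The certs.go steps above issue three profile certificates against the shared minikubeCA: a client cert for "minikube-user", an apiserver serving cert whose SANs cover the service VIP (10.96.0.1), loopback, and the node IP, and an aggregator proxy-client cert. A self-contained Go sketch of signing a serving cert with those IP SANs (illustrative only, not minikube's crypto.go; errors elided for brevity):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf serving cert with the IP SANs seen in the log.
		leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.181"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		cert, _ := x509.ParseCertificate(leafDER)
		fmt.Println("issued cert with SANs:", cert.IPAddresses)
	}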
	I0127 11:39:10.503556   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:39:10.503592   66811 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:39:10.503599   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:39:10.503649   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:39:10.503675   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:39:10.503695   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:39:10.503732   66811 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:39:10.504301   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:39:10.529025   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:39:10.551651   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:39:10.573801   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:39:10.596544   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 11:39:10.619389   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:39:10.646985   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:39:10.678874   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:39:10.701783   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:39:10.725376   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:39:10.747453   66811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:39:10.771619   66811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:39:10.787713   66811 ssh_runner.go:195] Run: openssl version
	I0127 11:39:10.793611   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:39:10.804947   66811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:10.809227   66811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:10.809292   66811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:10.814779   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:39:10.826107   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:39:10.837729   66811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:39:10.842079   66811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:39:10.842131   66811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:39:10.847695   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:39:10.859505   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:39:10.871237   66811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:39:10.875575   66811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:39:10.875647   66811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:39:10.881166   66811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
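The openssl x509 -hash / ln -fs pairs above install each CA under OpenSSL's hashed-directory convention: a trusted certificate is looked up in /etc/ssl/certs by its subject-name hash with a ".0" suffix (b5213941.0 is the hash for minikubeCA.pem here), so a symlink is all it takes to make the cert trusted. A Go sketch of that convention (hypothetical paths; assumes openssl on PATH and write access to the cert directory):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash installs pemPath into certDir under the <subject-hash>.0
	// name, which is what the "openssl x509 -hash" + "ln -fs" pair does.
	func linkByHash(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Paths are for illustration only.
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}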
	I0127 11:39:10.891664   66811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:39:10.895482   66811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:39:10.895532   66811 kubeadm.go:392] StartCluster: {Name:no-preload-273200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:39:10.895597   66811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:39:10.895684   66811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:39:10.932906   66811 cri.go:89] found id: ""
	I0127 11:39:10.932982   66811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:39:10.943080   66811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:39:10.952795   66811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:39:10.962088   66811 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:39:10.962111   66811 kubeadm.go:157] found existing configuration files:
	
	I0127 11:39:10.962160   66811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:39:10.971005   66811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:39:10.971075   66811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:39:10.980697   66811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:39:10.990118   66811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:39:10.990188   66811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:39:11.000202   66811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:39:11.009598   66811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:39:11.009647   66811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:39:11.018970   66811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:39:11.027717   66811 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:39:11.027773   66811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
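The four grep-then-rm steps above implement the stale-config check: a kubeconfig is kept only if it already pins https://control-plane.minikube.internal:8443, and a grep exit status of 1 (URL absent) or 2 (file missing, as here) both lead to removal so kubeadm can regenerate the file. Roughly, in Go (illustrative sketch, not the kubeadm.go implementation):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes a kubeconfig that does not pin the expected
	// control-plane endpoint; a missing file is already "clean".
	func removeIfStale(path, endpoint string) {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return // up to date, keep it
		}
		os.Remove(path) // stale or unreadable: let kubeadm rewrite it
		fmt.Println("removed", path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			removeIfStale("/etc/kubernetes/"+f, endpoint)
		}
	}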
	I0127 11:39:11.038362   66811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:39:11.179630   66811 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:39:12.234269   67633 machine.go:93] provisionDockerMachine start ...
	I0127 11:39:12.234278   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:12.234483   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:12.237018   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.237412   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.237432   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.237546   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:12.237694   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.237831   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.237954   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:12.238102   67633 main.go:141] libmachine: Using SSH client type: native
	I0127 11:39:12.238256   67633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0127 11:39:12.238260   67633 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:39:12.339922   67633 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-091274
	
	I0127 11:39:12.339952   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetMachineName
	I0127 11:39:12.340222   67633 buildroot.go:166] provisioning hostname "cert-expiration-091274"
	I0127 11:39:12.340241   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetMachineName
	I0127 11:39:12.340444   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:12.343414   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.343849   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.343871   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.344069   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:12.344220   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.344344   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.344459   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:12.344626   67633 main.go:141] libmachine: Using SSH client type: native
	I0127 11:39:12.344799   67633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0127 11:39:12.344809   67633 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-091274 && echo "cert-expiration-091274" | sudo tee /etc/hostname
	I0127 11:39:12.461299   67633 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-091274
	
	I0127 11:39:12.461319   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:12.464080   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.464510   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.464536   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.464703   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:12.464886   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.465031   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.465180   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:12.465314   67633 main.go:141] libmachine: Using SSH client type: native
	I0127 11:39:12.465493   67633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0127 11:39:12.465503   67633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-091274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-091274/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-091274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:39:12.574804   67633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:39:12.574824   67633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:39:12.574840   67633 buildroot.go:174] setting up certificates
	I0127 11:39:12.574846   67633 provision.go:84] configureAuth start
	I0127 11:39:12.574855   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetMachineName
	I0127 11:39:12.575139   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetIP
	I0127 11:39:12.577612   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.577941   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.577958   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.578138   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:12.580195   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.580534   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.580557   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.580700   67633 provision.go:143] copyHostCerts
	I0127 11:39:12.580746   67633 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:39:12.580761   67633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:39:12.580815   67633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:39:12.580895   67633 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:39:12.580898   67633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:39:12.580917   67633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:39:12.580961   67633 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:39:12.580964   67633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:39:12.580980   67633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:39:12.581020   67633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-091274 san=[127.0.0.1 192.168.39.30 cert-expiration-091274 localhost minikube]
	I0127 11:39:12.877020   67633 provision.go:177] copyRemoteCerts
	I0127 11:39:12.877060   67633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:39:12.877079   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:12.880046   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.880425   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:12.880443   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:12.880672   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:12.880859   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:12.880995   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:12.881102   67633 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/cert-expiration-091274/id_rsa Username:docker}
	I0127 11:39:12.965490   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:39:12.992640   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:39:13.015256   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:39:13.042445   67633 provision.go:87] duration metric: took 467.588956ms to configureAuth
	I0127 11:39:13.042459   67633 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:39:13.042617   67633 config.go:182] Loaded profile config "cert-expiration-091274": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:39:13.042675   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:13.045427   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:13.045734   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:13.045754   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:13.045975   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:13.046128   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:13.046241   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:13.046332   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:13.046467   67633 main.go:141] libmachine: Using SSH client type: native
	I0127 11:39:13.046605   67633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0127 11:39:13.046614   67633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:39:18.581778   67633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:39:18.581794   67633 machine.go:96] duration metric: took 6.347518393s to provisionDockerMachine
	I0127 11:39:18.581806   67633 start.go:293] postStartSetup for "cert-expiration-091274" (driver="kvm2")
	I0127 11:39:18.581819   67633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:39:18.581857   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:18.582182   67633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:39:18.582201   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:18.585253   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.585685   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:18.585706   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.585904   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:18.586100   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:18.586281   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:18.586414   67633 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/cert-expiration-091274/id_rsa Username:docker}
	I0127 11:39:18.669811   67633 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:39:18.674491   67633 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:39:18.674512   67633 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:39:18.674594   67633 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:39:18.674676   67633 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:39:18.674772   67633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:39:18.684797   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:39:18.711253   67633 start.go:296] duration metric: took 129.432057ms for postStartSetup
	I0127 11:39:18.711281   67633 fix.go:56] duration metric: took 6.497398125s for fixHost
	I0127 11:39:18.711312   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:18.714517   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.715041   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:18.715066   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.715277   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:18.715479   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:18.715681   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:18.715844   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:18.716017   67633 main.go:141] libmachine: Using SSH client type: native
	I0127 11:39:18.716185   67633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0127 11:39:18.716190   67633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:39:18.820973   67633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977958.809855535
	
	I0127 11:39:18.820986   67633 fix.go:216] guest clock: 1737977958.809855535
	I0127 11:39:18.820995   67633 fix.go:229] Guest: 2025-01-27 11:39:18.809855535 +0000 UTC Remote: 2025-01-27 11:39:18.711297999 +0000 UTC m=+6.656859102 (delta=98.557536ms)
	I0127 11:39:18.821036   67633 fix.go:200] guest clock delta is within tolerance: 98.557536ms
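The clock check compares the guest's date +%s.%N output against the host-side timestamp: 1737977958.809855535 is 2025-01-27 11:39:18.809855535 UTC, and subtracting the remote reading of 11:39:18.711297999 gives the reported 98.557536ms, under the restart threshold. The arithmetic as a tiny Go check (values copied from the log above):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1737977958, 809855535).UTC() // from date +%s.%N on the VM
		remote := time.Date(2025, 1, 27, 11, 39, 18, 711297999, time.UTC)
		fmt.Println(guest.Sub(remote)) // 98.557536ms
	}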
	I0127 11:39:18.821040   67633 start.go:83] releasing machines lock for "cert-expiration-091274", held for 6.60716612s
	I0127 11:39:18.821063   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:18.821352   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetIP
	I0127 11:39:18.824356   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.824701   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:18.824723   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.824898   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:18.825479   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:18.825684   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .DriverName
	I0127 11:39:18.825768   67633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:39:18.825806   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:18.825895   67633 ssh_runner.go:195] Run: cat /version.json
	I0127 11:39:18.825909   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHHostname
	I0127 11:39:18.828524   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.828875   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.828903   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:18.828923   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.829082   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:18.829292   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:18.829315   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:18.829363   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:18.829450   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:18.829611   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHPort
	I0127 11:39:18.829621   67633 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/cert-expiration-091274/id_rsa Username:docker}
	I0127 11:39:18.829753   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHKeyPath
	I0127 11:39:18.829876   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetSSHUsername
	I0127 11:39:18.830021   67633 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/cert-expiration-091274/id_rsa Username:docker}
	I0127 11:39:18.928545   67633 ssh_runner.go:195] Run: systemctl --version
	I0127 11:39:18.934972   67633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:39:19.088563   67633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:39:19.094067   67633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:39:19.094119   67633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:39:19.102515   67633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 11:39:19.102527   67633 start.go:495] detecting cgroup driver to use...
	I0127 11:39:19.102588   67633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:39:19.117385   67633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:39:19.130582   67633 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:39:19.130615   67633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:39:19.143439   67633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:39:19.157048   67633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:39:19.299012   67633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:39:19.443438   67633 docker.go:233] disabling docker service ...
	I0127 11:39:19.443497   67633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:39:19.466421   67633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:39:19.482542   67633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:39:19.621902   67633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:39:19.749369   67633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:39:19.768738   67633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:39:19.788197   67633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:39:19.788259   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.800656   67633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:39:19.800713   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.811033   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.821184   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.831631   67633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:39:19.842876   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.853690   67633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.867127   67633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:39:19.877941   67633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:39:19.887441   67633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:39:19.898932   67633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:39:20.037911   67633 ssh_runner.go:195] Run: sudo systemctl restart crio
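The sed -i calls above rewrite whole key lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before the daemon-reload and crio restart pick them up. The first two edits, mirrored in Go on a made-up sample of the drop-in (illustrative; the real file carries more keys):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		// Same idea as the sed -i calls in the log: replace the entire key line.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}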
	I0127 11:39:20.253508   67633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:39:20.253608   67633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:39:20.258402   67633 start.go:563] Will wait 60s for crictl version
	I0127 11:39:20.258443   67633 ssh_runner.go:195] Run: which crictl
	I0127 11:39:20.262273   67633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:39:20.298732   67633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:39:20.298802   67633 ssh_runner.go:195] Run: crio --version
	I0127 11:39:20.326770   67633 ssh_runner.go:195] Run: crio --version
	I0127 11:39:20.359025   67633 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:39:20.549106   66811 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:39:20.549173   66811 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:39:20.549249   66811 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:39:20.549417   66811 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:39:20.549506   66811 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:39:20.549566   66811 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:39:20.551176   66811 out.go:235]   - Generating certificates and keys ...
	I0127 11:39:20.551263   66811 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:39:20.551321   66811 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:39:20.551395   66811 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:39:20.551481   66811 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:39:20.551575   66811 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:39:20.551663   66811 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:39:20.551748   66811 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:39:20.551931   66811 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-273200] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0127 11:39:20.551985   66811 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:39:20.552152   66811 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-273200] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0127 11:39:20.552246   66811 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:39:20.552363   66811 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:39:20.552435   66811 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:39:20.552523   66811 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:39:20.552570   66811 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:39:20.552637   66811 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:39:20.552705   66811 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:39:20.552788   66811 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:39:20.552873   66811 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:39:20.552990   66811 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:39:20.553089   66811 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:39:20.555539   66811 out.go:235]   - Booting up control plane ...
	I0127 11:39:20.555703   66811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:39:20.555818   66811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:39:20.555910   66811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:39:20.556064   66811 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:39:20.556218   66811 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:39:20.556283   66811 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:39:20.556473   66811 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:39:20.556637   66811 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:39:20.556740   66811 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001825592s
	I0127 11:39:20.556862   66811 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:39:20.556927   66811 kubeadm.go:310] [api-check] The API server is healthy after 4.501691851s
	I0127 11:39:20.557034   66811 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:39:20.557203   66811 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:39:20.557282   66811 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:39:20.557476   66811 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-273200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:39:20.557536   66811 kubeadm.go:310] [bootstrap-token] Using token: kug9d1.1kzl9tjp8z235hmk
	I0127 11:39:20.558961   66811 out.go:235]   - Configuring RBAC rules ...
	I0127 11:39:20.559070   66811 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:39:20.559141   66811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:39:20.559299   66811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:39:20.559485   66811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:39:20.559595   66811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:39:20.559726   66811 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:39:20.559891   66811 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:39:20.559963   66811 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:39:20.560031   66811 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:39:20.560041   66811 kubeadm.go:310] 
	I0127 11:39:20.560103   66811 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:39:20.560111   66811 kubeadm.go:310] 
	I0127 11:39:20.560221   66811 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:39:20.560237   66811 kubeadm.go:310] 
	I0127 11:39:20.560268   66811 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:39:20.560318   66811 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:39:20.560363   66811 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:39:20.560369   66811 kubeadm.go:310] 
	I0127 11:39:20.560412   66811 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:39:20.560418   66811 kubeadm.go:310] 
	I0127 11:39:20.560457   66811 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:39:20.560463   66811 kubeadm.go:310] 
	I0127 11:39:20.560510   66811 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:39:20.560578   66811 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:39:20.560684   66811 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:39:20.560700   66811 kubeadm.go:310] 
	I0127 11:39:20.560808   66811 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:39:20.560882   66811 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:39:20.560888   66811 kubeadm.go:310] 
	I0127 11:39:20.560955   66811 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kug9d1.1kzl9tjp8z235hmk \
	I0127 11:39:20.561053   66811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:39:20.561093   66811 kubeadm.go:310] 	--control-plane 
	I0127 11:39:20.561102   66811 kubeadm.go:310] 
	I0127 11:39:20.561232   66811 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:39:20.561242   66811 kubeadm.go:310] 
	I0127 11:39:20.561338   66811 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kug9d1.1kzl9tjp8z235hmk \
	I0127 11:39:20.561447   66811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
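
The --discovery-token-ca-cert-hash printed by kubeadm above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes the sha256:... value; the cert path comes from the certificateDir logged earlier:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the
// CA certificate's DER-encoded SubjectPublicKeyInfo.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
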
	I0127 11:39:20.561457   66811 cni.go:84] Creating CNI manager for ""
	I0127 11:39:20.561463   66811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:39:20.562957   66811 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:39:20.360364   67633 main.go:141] libmachine: (cert-expiration-091274) Calling .GetIP
	I0127 11:39:20.363029   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:20.363341   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:cd:8e", ip: ""} in network mk-cert-expiration-091274: {Iface:virbr2 ExpiryTime:2025-01-27 12:35:45 +0000 UTC Type:0 Mac:52:54:00:a9:cd:8e Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:cert-expiration-091274 Clientid:01:52:54:00:a9:cd:8e}
	I0127 11:39:20.363360   67633 main.go:141] libmachine: (cert-expiration-091274) DBG | domain cert-expiration-091274 has defined IP address 192.168.39.30 and MAC address 52:54:00:a9:cd:8e in network mk-cert-expiration-091274
	I0127 11:39:20.363560   67633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 11:39:20.367478   67633 kubeadm.go:883] updating cluster {Name:cert-expiration-091274 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-091274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:39:20.367553   67633 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:39:20.367586   67633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:39:20.410959   67633 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:39:20.410974   67633 crio.go:433] Images already preloaded, skipping extraction
	I0127 11:39:20.411040   67633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:39:20.443943   67633 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:39:20.443956   67633 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:39:20.443963   67633 kubeadm.go:934] updating node { 192.168.39.30 8443 v1.32.1 crio true true} ...
	I0127 11:39:20.444042   67633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-091274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-091274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:39:20.444099   67633 ssh_runner.go:195] Run: crio config
	I0127 11:39:20.491891   67633 cni.go:84] Creating CNI manager for ""
	I0127 11:39:20.491903   67633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:39:20.491912   67633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:39:20.491932   67633 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-091274 NodeName:cert-expiration-091274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:39:20.492047   67633 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-091274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:39:20.492097   67633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:39:20.502759   67633 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:39:20.502823   67633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:39:20.512550   67633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0127 11:39:20.529765   67633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:39:20.545910   67633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0127 11:39:20.563993   67633 ssh_runner.go:195] Run: grep 192.168.39.30	control-plane.minikube.internal$ /etc/hosts
	I0127 11:39:20.568170   67633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:39:20.772800   67633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:39:20.831709   67633 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274 for IP: 192.168.39.30
	I0127 11:39:20.831720   67633 certs.go:194] generating shared ca certs ...
	I0127 11:39:20.831736   67633 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:20.831888   67633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:39:20.831920   67633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:39:20.831925   67633 certs.go:256] generating profile certs ...
	W0127 11:39:20.832029   67633 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0127 11:39:20.832048   67633 certs.go:624] cert expired /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.crt: expiration: 2025-01-27 11:39:00 +0000 UTC, now: 2025-01-27 11:39:20.832043663 +0000 UTC m=+8.777604774
	I0127 11:39:20.832151   67633 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.key
	I0127 11:39:20.832172   67633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.crt with IP's: []
	I0127 11:39:21.028782   67633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.crt ...
	I0127 11:39:21.028796   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.crt: {Name:mkd8040274eaf11476d1fb7029d2302bad76cda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:21.028989   67633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.key ...
	I0127 11:39:21.028999   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/client.key: {Name:mka2822df9d074d9cd573f93f6990435887d74a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0127 11:39:21.029183   67633 out.go:270] ! Certificate apiserver.crt.70eade73 has expired. Generating a new one...
	I0127 11:39:21.029209   67633 certs.go:624] cert expired /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt.70eade73: expiration: 2025-01-27 11:39:00 +0000 UTC, now: 2025-01-27 11:39:21.029203255 +0000 UTC m=+8.974764360
	I0127 11:39:21.029320   67633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key.70eade73
	I0127 11:39:21.029337   67633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt.70eade73 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.30]
	I0127 11:39:21.311256   67633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt.70eade73 ...
	I0127 11:39:21.311270   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt.70eade73: {Name:mke47301903894acb76c38e723371132ba5d2bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:21.311395   67633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key.70eade73 ...
	I0127 11:39:21.311403   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key.70eade73: {Name:mkda3f9b9ce2a9f5253bcf4b374e2fdbf4cad978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:21.311457   67633 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt.70eade73 -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt
	I0127 11:39:21.311585   67633 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key.70eade73 -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key
	W0127 11:39:21.311781   67633 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0127 11:39:21.311798   67633 certs.go:624] cert expired /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.crt: expiration: 2025-01-27 11:39:00 +0000 UTC, now: 2025-01-27 11:39:21.311793957 +0000 UTC m=+9.257355063
	I0127 11:39:21.311851   67633 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.key
	I0127 11:39:21.311862   67633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.crt with IP's: []
	I0127 11:39:21.574984   67633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.crt ...
	I0127 11:39:21.575000   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.crt: {Name:mk9f722b363546afb315fa551085b9e81ea52d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:21.575133   67633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.key ...
	I0127 11:39:21.575139   67633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.key: {Name:mkc1642908c0118f7b0eaacbdd2803040a47f20d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
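
Each expired profile cert above is replaced by generating a fresh key pair and signing it with the still-valid minikube CA. A rough sketch of that flow with crypto/x509; error handling is elided, and the PKCS#1 CA key format and the system:masters organization are assumptions, not taken from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Sign a fresh client cert with the profile CA, roughly what the
// crypto.go "Generating cert ... Writing cert/key" lines above do.
func main() {
	caCertPEM, _ := os.ReadFile(".minikube/ca.crt")
	caKeyPEM, _ := os.ReadFile(".minikube/ca.key")
	cb, _ := pem.Decode(caCertPEM)
	kb, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(cb.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(kb.Bytes) // assumed RSA/PKCS#1 key

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject: pkix.Name{
			CommonName:   "minikube-user",
			Organization: []string{"system:masters"}, // assumed
		},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().Add(8760 * time.Hour), // CertExpiration:8760h0m0s
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
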
	I0127 11:39:21.575291   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:39:21.575320   67633 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:39:21.575326   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:39:21.575351   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:39:21.575368   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:39:21.575384   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:39:21.575414   67633 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:39:21.576044   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:39:21.609951   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:39:21.642095   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:39:21.674989   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:39:21.808540   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:39:21.861398   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:39:21.892698   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:39:21.924259   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/cert-expiration-091274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:39:21.954868   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:39:21.988321   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:39:22.024623   67633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:39:22.052494   67633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:39:22.073044   67633 ssh_runner.go:195] Run: openssl version
	I0127 11:39:22.078631   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:39:22.092068   67633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:39:20.564154   66811 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:39:20.574509   66811 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
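
The 496-byte 1-k8s.conflist written above is the bridge CNI chain that "Configuring bridge CNI" refers to. A hedged reconstruction that emits a comparable config list with encoding/json; apart from the 10.244.0.0/16 pod CIDR taken from the kubeadm options earlier, the plugin fields are assumptions, not minikube's exact template:

package main

import (
	"encoding/json"
	"os"
)

// Emit a bridge CNI config list comparable to /etc/cni/net.d/1-k8s.conflist.
func main() {
	conf := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the log
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(conf)
}
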
	I0127 11:39:20.596963   66811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:39:20.597043   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:20.597066   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-273200 minikube.k8s.io/updated_at=2025_01_27T11_39_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-273200 minikube.k8s.io/primary=true
	I0127 11:39:20.617224   66811 ops.go:34] apiserver oom_adj: -16
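
ops.go reports the apiserver's OOM score adjustment (-16 here) from the bash one-liner run three lines up. The same /proc read as a tiny Go program; pgrep -n picks the newest matching process:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Read /proc/<pid>/oom_adj for kube-apiserver, equivalent to
// `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func main() {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
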
	I0127 11:39:20.736251   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:21.236830   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:21.736671   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:22.236492   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:22.736915   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:23.237066   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:23.736817   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:24.237149   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:24.737267   66811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:39:24.826083   66811 kubeadm.go:1113] duration metric: took 4.229106841s to wait for elevateKubeSystemPrivileges
	I0127 11:39:24.826149   66811 kubeadm.go:394] duration metric: took 13.9306195s to StartCluster
	I0127 11:39:24.826187   66811 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:24.826276   66811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:39:24.828040   66811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:39:24.828306   66811 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:39:24.828358   66811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:39:24.828413   66811 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:39:24.828516   66811 addons.go:69] Setting storage-provisioner=true in profile "no-preload-273200"
	I0127 11:39:24.828534   66811 addons.go:69] Setting default-storageclass=true in profile "no-preload-273200"
	I0127 11:39:24.828551   66811 addons.go:238] Setting addon storage-provisioner=true in "no-preload-273200"
	I0127 11:39:24.828601   66811 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:39:24.828552   66811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-273200"
	I0127 11:39:24.828556   66811 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:39:24.829062   66811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:24.829068   66811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:24.829087   66811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:24.829093   66811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:24.829857   66811 out.go:177] * Verifying Kubernetes components...
	I0127 11:39:24.831285   66811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:39:24.847337   66811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0127 11:39:24.848630   66811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0127 11:39:24.848662   66811 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:24.849038   66811 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:24.849229   66811 main.go:141] libmachine: Using API Version  1
	I0127 11:39:24.849248   66811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:24.849561   66811 main.go:141] libmachine: Using API Version  1
	I0127 11:39:24.849585   66811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:24.849616   66811 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:24.850101   66811 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:24.850243   66811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:24.850286   66811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:24.850295   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:39:24.854176   66811 addons.go:238] Setting addon default-storageclass=true in "no-preload-273200"
	I0127 11:39:24.854218   66811 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:39:24.854601   66811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:24.854629   66811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:24.867823   66811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0127 11:39:24.868359   66811 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:24.868934   66811 main.go:141] libmachine: Using API Version  1
	I0127 11:39:24.868960   66811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:24.869655   66811 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:24.869869   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:39:24.871790   66811 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:39:24.873645   66811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:39:24.875035   66811 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:39:24.875053   66811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:39:24.875071   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:39:24.875519   66811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0127 11:39:24.876252   66811 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:24.876855   66811 main.go:141] libmachine: Using API Version  1
	I0127 11:39:24.876872   66811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:24.877197   66811 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:24.877884   66811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:39:24.877922   66811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:39:24.878517   66811 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:39:24.878901   66811 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:43 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:39:24.878916   66811 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:39:24.879087   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:39:24.879209   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:39:24.879296   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:39:24.879405   66811 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:39:24.898179   66811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0127 11:39:24.898656   66811 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:39:24.899174   66811 main.go:141] libmachine: Using API Version  1
	I0127 11:39:24.899194   66811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:39:24.899566   66811 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:39:24.899798   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:39:24.901754   66811 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:39:24.901958   66811 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:39:24.901972   66811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:39:24.901987   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:39:24.905245   66811 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:39:24.905684   66811 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:43 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:39:24.905710   66811 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:39:24.906021   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:39:24.906181   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:39:24.906368   66811 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:39:24.906525   66811 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:39:25.037008   66811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:39:25.065517   66811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:39:25.208183   66811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:39:25.222590   66811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:39:25.662540   66811 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
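
The long kubectl/sed pipeline above splices a hosts block into CoreDNS's Corefile just before its forward stanza, which is what "host record injected" confirms. A sketch of that text transformation; the sample Corefile is illustrative, while the inserted block mirrors the sed expression from the log:

package main

import (
	"fmt"
	"strings"
)

// Splice a hosts block into a Corefile before the forward stanza,
// mirroring the sed `/^        forward . \/etc\/resolv.conf.*/i` edit.
func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := `        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hosts) // insert before, as sed's i command does
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
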
	I0127 11:39:25.664048   66811 node_ready.go:35] waiting up to 6m0s for node "no-preload-273200" to be "Ready" ...
	I0127 11:39:25.674196   66811 node_ready.go:49] node "no-preload-273200" has status "Ready":"True"
	I0127 11:39:25.674220   66811 node_ready.go:38] duration metric: took 10.133397ms for node "no-preload-273200" to be "Ready" ...
	I0127 11:39:25.674231   66811 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:39:25.691753   66811 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7d4t6" in "kube-system" namespace to be "Ready" ...
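
node_ready.go and pod_ready.go both poll with a 6m deadline until the Ready condition reports True. An illustrative stand-in for that wait which shells out to kubectl rather than using the Kubernetes API directly:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// Poll a node's Ready condition with a 6m deadline, the same shape as
// `waiting up to 6m0s for node "no-preload-273200" to be "Ready"`.
func main() {
	const node = "no-preload-273200"
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
	os.Exit(1)
}
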
	I0127 11:39:26.174184   66811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-273200" context rescaled to 1 replicas
	I0127 11:39:26.300318   66811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.092089572s)
	I0127 11:39:26.300367   66811 main.go:141] libmachine: Making call to close driver server
	I0127 11:39:26.300380   66811 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:39:26.300717   66811 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:39:26.300769   66811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.07814077s)
	I0127 11:39:26.300798   66811 main.go:141] libmachine: Making call to close driver server
	I0127 11:39:26.300833   66811 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:39:26.302422   66811 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:39:26.302455   66811 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:39:26.302470   66811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:39:26.302470   66811 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:39:26.302480   66811 main.go:141] libmachine: Making call to close driver server
	I0127 11:39:26.302485   66811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:39:26.302489   66811 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:39:26.302496   66811 main.go:141] libmachine: Making call to close driver server
	I0127 11:39:26.302504   66811 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:39:26.304964   66811 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:39:26.304965   66811 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:39:26.304970   66811 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:39:26.304992   66811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:39:26.305014   66811 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:39:26.305033   66811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:39:26.336370   66811 main.go:141] libmachine: Making call to close driver server
	I0127 11:39:26.336401   66811 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:39:26.336739   66811 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:39:26.336792   66811 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:39:26.336802   66811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:39:26.338995   66811 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:39:22.097052   67633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:39:22.097094   67633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:39:22.103399   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:39:22.115118   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:39:22.126962   67633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:22.132413   67633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:22.132462   67633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:39:22.137905   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:39:22.149024   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:39:22.160169   67633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:39:22.165426   67633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:39:22.165467   67633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:39:22.170930   67633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
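
Each CA above is exposed to OpenSSL-based clients by symlinking it into /etc/ssl/certs under its subject hash (51391683.0, b5213941.0, 3ec20f2e.0). A sketch reproducing the hash-and-link step; it needs root and the openssl CLI on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Link a CA into /etc/ssl/certs under its OpenSSL subject hash,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` steps above.
func main() {
	const pemPath = "/usr/share/ca-certificates/26072.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
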
	I0127 11:39:22.183428   67633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:39:22.188618   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:39:22.198724   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:39:22.204084   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:39:22.213219   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:39:22.219668   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:39:22.226821   67633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
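
openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next day, which is how the runs above decide the control-plane certs are still usable. The equivalent check with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// The crypto/x509 equivalent of `openssl x509 -noout -checkend 86400`.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}
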
	I0127 11:39:22.242172   67633 kubeadm.go:392] StartCluster: {Name:cert-expiration-091274 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-091274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:39:22.242226   67633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:39:22.242275   67633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:39:22.311143   67633 cri.go:89] found id: "c4835da38bc4aa40a6701ff3330e6a9ff0b54e319a366490f9403addd36a7042"
	I0127 11:39:22.311157   67633 cri.go:89] found id: "8b82b60d33bbf6b0e8dc185ab602a7f49f90dbef167507735fc692b43e3351b5"
	I0127 11:39:22.311162   67633 cri.go:89] found id: "ecda6e62aca1b65055600ff1c647f1a3fa87d030ba308a6ba86242035ed38c1a"
	I0127 11:39:22.311165   67633 cri.go:89] found id: "32774f2affbb4e505110898d291990a6e62a7fbd9374eca6499efdf68ed50dd8"
	I0127 11:39:22.311168   67633 cri.go:89] found id: "dba54dc06ca6b27fc478625d0d757cb70f3e2bc4f67acf951bd5c69ac617532b"
	I0127 11:39:22.311170   67633 cri.go:89] found id: "7ec7e9b1b38d66982585128ef51ff37feae0943b21c5feff4ea44d7d484ee832"
	I0127 11:39:22.311173   67633 cri.go:89] found id: "847bd91dad5b99840f85e6939eabe4e4a9ecf030e6a5eb7923921339f18f0065"
	I0127 11:39:22.311175   67633 cri.go:89] found id: "deb2b277d592fbb534395db521655c3057c630a6a203d9e665326bb5f76a7729"
	I0127 11:39:22.311178   67633 cri.go:89] found id: "96f33b2b2754a7ffd9edd7b31300b270d8bee8c22ea2b20735ad65367a77eb1a"
	I0127 11:39:22.311184   67633 cri.go:89] found id: "a97535ec1730611946444b08378edcda69a85d8ce2fbcaab223dd0a861f2da7e"
	I0127 11:39:22.311187   67633 cri.go:89] found id: "c6d790c12009eba4cae2e6accc4dec8248334e1dc98658bb7093177bf07b29b6"
	I0127 11:39:22.311190   67633 cri.go:89] found id: "686ac8ac01f0f9bb3bd6597d7c1fd3a8996dc1e80bc7ee3e98ad2e5e487e70a2"
	I0127 11:39:22.311193   67633 cri.go:89] found id: "a60751d7600282a02cfd2b73d8ab0fc1e59c15c51851b17eb1403cc235d83c89"
	I0127 11:39:22.311196   67633 cri.go:89] found id: ""
	I0127 11:39:22.311248   67633 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-480798 -n kubernetes-upgrade-480798
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-480798 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-480798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-480798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-480798: (1.154063903s)
--- FAIL: TestKubernetesUpgrade (403.26s)

TestPause/serial/SecondStartNoReconfiguration (69.98s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-900843 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-900843 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.116044372s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-900843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-900843" primary control-plane node in "pause-900843" cluster
	* Updating the running kvm2 "pause-900843" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-900843" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0127 11:33:13.958769   60608 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:33:13.958858   60608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:13.958863   60608 out.go:358] Setting ErrFile to fd 2...
	I0127 11:33:13.958868   60608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:13.959027   60608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:33:13.959528   60608 out.go:352] Setting JSON to false
	I0127 11:33:13.960469   60608 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8094,"bootTime":1737969500,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:33:13.960561   60608 start.go:139] virtualization: kvm guest
	I0127 11:33:13.963038   60608 out.go:177] * [pause-900843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:33:13.964330   60608 notify.go:220] Checking for updates...
	I0127 11:33:13.964353   60608 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:33:13.965533   60608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:33:13.966779   60608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:33:13.967919   60608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:33:13.969312   60608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:33:13.970470   60608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:33:13.971896   60608 config.go:182] Loaded profile config "pause-900843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:33:13.972276   60608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:13.972344   60608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:13.988011   60608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0127 11:33:13.988387   60608 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:13.988990   60608 main.go:141] libmachine: Using API Version  1
	I0127 11:33:13.989017   60608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:13.989386   60608 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:13.989595   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:13.989868   60608 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:33:13.990270   60608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:13.990311   60608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:14.005142   60608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40151
	I0127 11:33:14.005624   60608 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:14.006131   60608 main.go:141] libmachine: Using API Version  1
	I0127 11:33:14.006161   60608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:14.006545   60608 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:14.006697   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:14.044953   60608 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:33:14.046322   60608 start.go:297] selected driver: kvm2
	I0127 11:33:14.046349   60608 start.go:901] validating driver "kvm2" against &{Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:14.046500   60608 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:33:14.046944   60608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:14.047037   60608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:33:14.063257   60608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:33:14.064057   60608 cni.go:84] Creating CNI manager for ""
	I0127 11:33:14.064099   60608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:14.064145   60608 start.go:340] cluster config:
	{Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:14.064255   60608 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:14.065974   60608 out.go:177] * Starting "pause-900843" primary control-plane node in "pause-900843" cluster
	I0127 11:33:14.067115   60608 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:33:14.067159   60608 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:33:14.067169   60608 cache.go:56] Caching tarball of preloaded images
	I0127 11:33:14.067260   60608 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:33:14.067276   60608 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:33:14.067381   60608 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/config.json ...
	I0127 11:33:14.067566   60608 start.go:360] acquireMachinesLock for pause-900843: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:33:22.743923   60608 start.go:364] duration metric: took 8.676288284s to acquireMachinesLock for "pause-900843"
	I0127 11:33:22.744031   60608 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:33:22.744040   60608 fix.go:54] fixHost starting: 
	I0127 11:33:22.744394   60608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:22.744437   60608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:22.760146   60608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0127 11:33:22.760580   60608 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:22.761071   60608 main.go:141] libmachine: Using API Version  1
	I0127 11:33:22.761096   60608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:22.761461   60608 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:22.761629   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:22.761801   60608 main.go:141] libmachine: (pause-900843) Calling .GetState
	I0127 11:33:22.763356   60608 fix.go:112] recreateIfNeeded on pause-900843: state=Running err=<nil>
	W0127 11:33:22.763382   60608 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:33:22.765140   60608 out.go:177] * Updating the running kvm2 "pause-900843" VM ...
	I0127 11:33:22.766414   60608 machine.go:93] provisionDockerMachine start ...
	I0127 11:33:22.766431   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:22.766600   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:22.769066   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:22.769510   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:22.769545   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:22.769699   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:22.769859   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:22.769989   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:22.770109   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:22.770257   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:22.770441   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:22.770453   60608 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:33:22.884063   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-900843
	
	I0127 11:33:22.884096   60608 main.go:141] libmachine: (pause-900843) Calling .GetMachineName
	I0127 11:33:22.884298   60608 buildroot.go:166] provisioning hostname "pause-900843"
	I0127 11:33:22.884326   60608 main.go:141] libmachine: (pause-900843) Calling .GetMachineName
	I0127 11:33:22.884514   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:22.887206   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:22.887549   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:22.887576   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:22.887719   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:22.887863   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:22.888008   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:22.888166   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:22.888324   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:22.888533   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:22.888552   60608 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-900843 && echo "pause-900843" | sudo tee /etc/hostname
	I0127 11:33:23.020454   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-900843
	
	I0127 11:33:23.020490   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:23.023937   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.024400   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:23.024442   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.024795   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:23.025010   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:23.025197   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:23.025351   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:23.025528   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:23.025743   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:23.025760   60608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-900843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-900843/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-900843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:33:23.140721   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:33:23.140748   60608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:33:23.140771   60608 buildroot.go:174] setting up certificates
	I0127 11:33:23.140781   60608 provision.go:84] configureAuth start
	I0127 11:33:23.140793   60608 main.go:141] libmachine: (pause-900843) Calling .GetMachineName
	I0127 11:33:23.141072   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:23.143996   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.144359   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:23.144386   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.144649   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:23.147164   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.147525   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:23.147566   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.147744   60608 provision.go:143] copyHostCerts
	I0127 11:33:23.147827   60608 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:33:23.147847   60608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:33:23.147922   60608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:33:23.148034   60608 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:33:23.148048   60608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:33:23.148081   60608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:33:23.148160   60608 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:33:23.148171   60608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:33:23.148192   60608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:33:23.148243   60608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.pause-900843 san=[127.0.0.1 192.168.50.246 localhost minikube pause-900843]
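	The provision.go:117 step above issues a server certificate whose SANs cover the loopback address, the VM IP, and the machine's host names. A self-contained sketch of producing a SAN-bearing certificate with Go's crypto/x509 follows; it self-signs for brevity, whereas the real provisioning signs with the minikube CA:

	// Minimal sketch: create a self-signed server cert carrying the same kinds
	// of SANs as above (IPs + DNS names). Error handling trimmed for brevity.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-900843"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "pause-900843"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.246")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}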
	I0127 11:33:23.415998   60608 provision.go:177] copyRemoteCerts
	I0127 11:33:23.416065   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:33:23.416089   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:23.419202   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.419541   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:23.419588   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.419782   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:23.419991   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:23.420207   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:23.420360   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:23.516670   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:33:23.552602   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:33:23.581870   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:33:23.619752   60608 provision.go:87] duration metric: took 478.956452ms to configureAuth
	I0127 11:33:23.619790   60608 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:33:23.620049   60608 config.go:182] Loaded profile config "pause-900843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:33:23.620163   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:23.623587   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.624086   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:23.624134   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:23.624300   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:23.624507   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:23.624684   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:23.624885   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:23.625083   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:23.625257   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:23.625278   60608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:33:29.144459   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:33:29.144490   60608 machine.go:96] duration metric: took 6.378062966s to provisionDockerMachine
	I0127 11:33:29.144505   60608 start.go:293] postStartSetup for "pause-900843" (driver="kvm2")
	I0127 11:33:29.144518   60608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:29.144539   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.144843   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:29.144869   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.147518   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.147857   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.147896   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.148013   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.148187   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.148342   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.148441   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.233611   60608 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:29.237809   60608 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:33:29.237835   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:29.237892   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:29.237987   60608 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:29.238115   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:29.247346   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:29.274787   60608 start.go:296] duration metric: took 130.266105ms for postStartSetup
	I0127 11:33:29.274839   60608 fix.go:56] duration metric: took 6.530798739s for fixHost
	I0127 11:33:29.274862   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.278195   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278722   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.278758   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278980   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.279162   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279341   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279491   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.279663   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.279831   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:29.279844   60608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:29.396114   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977609.374897195
	
	I0127 11:33:29.396137   60608 fix.go:216] guest clock: 1737977609.374897195
	I0127 11:33:29.396151   60608 fix.go:229] Guest: 2025-01-27 11:33:29.374897195 +0000 UTC Remote: 2025-01-27 11:33:29.274843307 +0000 UTC m=+15.354336795 (delta=100.053888ms)
	I0127 11:33:29.396176   60608 fix.go:200] guest clock delta is within tolerance: 100.053888ms
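	The fix.go:216-229 records above read `date +%s.%N` from the guest and compare it with the host clock. A rough sketch of that skew check; the 2s tolerance here is an assumed illustration value, and float parsing is precise enough for a sketch:

	// Sketch: parse a `date +%s.%N` reading from the guest and check the skew
	// against the local clock. The 2s tolerance is an assumed value.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestRaw := "1737977609.374897195" // output of `date +%s.%N` on the guest
		secs, err := strconv.ParseFloat(guestRaw, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
	}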
	I0127 11:33:29.396183   60608 start.go:83] releasing machines lock for "pause-900843", held for 6.652178072s
	I0127 11:33:29.396207   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.396466   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:29.399408   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.399799   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.399826   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.400009   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400559   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400751   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400874   60608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:29.400923   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.400955   60608 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:29.400977   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.403665   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.403998   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404026   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404049   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404241   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404423   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404481   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404512   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404854   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404902   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.405033   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.405143   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.513057   60608 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:29.518966   60608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:29.677793   60608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:29.687702   60608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:29.687796   60608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:29.700107   60608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 11:33:29.700132   60608 start.go:495] detecting cgroup driver to use...
	I0127 11:33:29.700206   60608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:29.720239   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:29.736759   60608 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:29.736864   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:29.751575   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:29.766382   60608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:29.929606   60608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:30.068031   60608 docker.go:233] disabling docker service ...
	I0127 11:33:30.068092   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:30.090234   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:30.104920   60608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:30.267073   60608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:30.410099   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:30.426262   60608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:30.448794   60608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:33:30.448851   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.461453   60608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:30.461514   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.476685   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.488442   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.498740   60608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:30.512774   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.526619   60608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.539865   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.551635   60608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:30.562250   60608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:33:30.572647   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:30.724876   60608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:31.947292   60608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.222375748s)
	I0127 11:33:31.947338   60608 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:31.947399   60608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:31.964628   60608 start.go:563] Will wait 60s for crictl version
	I0127 11:33:31.964697   60608 ssh_runner.go:195] Run: which crictl
	I0127 11:33:31.971037   60608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:32.104049   60608 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
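	The start.go:542 and start.go:563 steps above each poll with a 60s budget, first for the CRI socket and then for a usable crictl. A generic sketch of that wait-for-path pattern (the poll interval is illustrative):

	// Sketch of the "Will wait 60s for ..." pattern: poll for a path until it
	// exists or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}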
	I0127 11:33:32.104148   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.356574   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.594082   60608 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:33:32.595350   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:32.598873   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599218   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:32.599253   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599510   60608 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:32.624423   60608 kubeadm.go:883] updating cluster {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:32.624616   60608 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:33:32.624686   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.763015   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.763047   60608 crio.go:433] Images already preloaded, skipping extraction
	I0127 11:33:32.763107   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.879425   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.879449   60608 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:33:32.879459   60608 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.32.1 crio true true} ...
	I0127 11:33:32.879577   60608 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-900843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:33:32.879673   60608 ssh_runner.go:195] Run: crio config
	I0127 11:33:33.004770   60608 cni.go:84] Creating CNI manager for ""
	I0127 11:33:33.004798   60608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:33.004812   60608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:33:33.004845   60608 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-900843 NodeName:pause-900843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:33:33.005056   60608 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-900843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
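	The kubeadm.go:195 record above prints the fully rendered kubeadm config. As a toy illustration only (this is not minikube's actual template), Go's text/template can render such a fragment from the cluster parameters:

	// Toy illustration: render a fragment of a kubeadm config from cluster
	// parameters with text/template.
	package main

	import (
		"os"
		"text/template"
	)

	const frag = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		params := struct {
			NodeIP, CRISocket, NodeName string
			APIServerPort               int
		}{"192.168.50.246", "/var/run/crio/crio.sock", "pause-900843", 8443}
		template.Must(template.New("kubeadm").Parse(frag)).Execute(os.Stdout, params)
	}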
	I0127 11:33:33.005120   60608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:33:33.053115   60608 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:33.053188   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:33.063741   60608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 11:33:33.082239   60608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:33.155905   60608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
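	The `scp memory --> ...` records above stream generated file contents straight to the guest instead of copying a local file. A minimal sketch of that idea over plain ssh; the host alias and the use of `sudo tee` are assumptions, not minikube's actual transport:

	// Sketch of the "scp memory --> <path>" step: stream in-memory bytes to a
	// remote file by piping them into a remote `sudo tee`.
	package main

	import (
		"bytes"
		"os/exec"
	)

	func pushBytes(host, path string, data []byte) error {
		// Equivalent of: ssh <host> "sudo tee <path> >/dev/null" < data
		cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		unit := []byte("[Unit]\nWants=crio.service\n")
		if err := pushBytes("pause-900843", "/lib/systemd/system/kubelet.service", unit); err != nil {
			panic(err)
		}
	}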
	I0127 11:33:33.190014   60608 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:33.194134   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:33.428935   60608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:33.463354   60608 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843 for IP: 192.168.50.246
	I0127 11:33:33.463375   60608 certs.go:194] generating shared ca certs ...
	I0127 11:33:33.463394   60608 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:33.463564   60608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:33.463652   60608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:33.463669   60608 certs.go:256] generating profile certs ...
	I0127 11:33:33.463840   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/client.key
	I0127 11:33:33.463939   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key.ff28fce8
	I0127 11:33:33.463981   60608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key
	I0127 11:33:33.464081   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:33.464119   60608 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:33.464129   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:33.464162   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:33.464195   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:33.464226   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:33.464280   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:33.465040   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:33.500924   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:33.535736   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:33.568188   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:33.601451   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:33:33.626636   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:33:33.650258   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:33.698501   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:33.726687   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:33.756031   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:33.781042   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:33.807550   60608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:33.833954   60608 ssh_runner.go:195] Run: openssl version
	I0127 11:33:33.840318   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:33.856131   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860824   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860917   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.867171   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:33.879120   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:33.890622   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895292   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895350   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.900938   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:33.910937   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:33.922304   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927290   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927347   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.933682   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:33:33.947503   60608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:33.953994   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:33:33.960197   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:33:33.967566   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:33:33.974130   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:33:33.980629   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:33:33.986481   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
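The two openssl idioms above do all of the trust bookkeeping: -hash prints the subject-name hash that OpenSSL uses to resolve <hash>.0 symlinks under /etc/ssl/certs (hence the "ln -fs ... b5213941.0" step), and -checkend 86400 exits non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether cluster certs need regenerating. A minimal Go sketch of both probes; the paths are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Subject-name hash: OpenSSL looks up CAs as /etc/ssl/certs/<hash>.0,
	// which is why minikubeCA.pem is symlinked to b5213941.0 above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err == nil {
		fmt.Printf("symlink target: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
	}

	// Expiry probe: -checkend N exits non-zero if the cert expires within
	// N seconds, so a nil error means at least 24h of validity remain.
	err = exec.Command("openssl", "x509", "-noout", "-checkend", "86400",
		"-in", "/var/lib/minikube/certs/apiserver.crt").Run()
	fmt.Println("apiserver.crt valid for 24h:", err == nil)
}
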
	I0127 11:33:33.992283   60608 kubeadm.go:392] StartCluster: {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:33.992430   60608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:33.992507   60608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:34.077457   60608 cri.go:89] found id: "19636b08b7622d750f5054d7b2d51fec4669f6c31448987e876b47e95d1eb0fb"
	I0127 11:33:34.077483   60608 cri.go:89] found id: "745d2dc203d2a6a3963920aa88049cf50d6ad6cb186e9782f5e2afe41c3ff84b"
	I0127 11:33:34.077490   60608 cri.go:89] found id: "2ffcce7727f3745431b7444cf89fc00c0ee7497937665bc92d12a40377390157"
	I0127 11:33:34.077495   60608 cri.go:89] found id: "347e50c706723add6c69c1bfeb19290636137a1e8765b41976cda1b16ed4076b"
	I0127 11:33:34.077500   60608 cri.go:89] found id: "13eaf4245fb539e865ee03fefd604b4b88fe4ff8af14b5c168acea7eb3f401be"
	I0127 11:33:34.077505   60608 cri.go:89] found id: "20e60f86899ccc2f414ff0642e113f31da0728a7b8375834767fbecc9be0c358"
	I0127 11:33:34.077509   60608 cri.go:89] found id: "1383f1d93fdba8af8ad1360ce250b50c269ebbf4b6c6fa1895494ae9968dadcb"
	I0127 11:33:34.077513   60608 cri.go:89] found id: "f72a1bdee26af527c97f25b5afd7ef636cba09b54fb369ae2a88f66006e1eb76"
	I0127 11:33:34.077517   60608 cri.go:89] found id: "0574c0c89c037a6f4a9e6f77dd5a5fb3dbb4526bb496e0e10a98db0cabdc5aae"
	I0127 11:33:34.077526   60608 cri.go:89] found id: "506354c5ff5e7ac4c31a161e4a512782957781f5a355d36b8b16aa8011149b3b"
	I0127 11:33:34.077531   60608 cri.go:89] found id: "1064451fc9f3850cee5e45dbbd6baea628acfab95608e700033ce004d3377c44"
	I0127 11:33:34.077534   60608 cri.go:89] found id: "5cb36e64d3ac093b2b4031fdd0eeedbf3b409bea7fc791055f42729930ad4409"
	I0127 11:33:34.077539   60608 cri.go:89] found id: ""
	I0127 11:33:34.077585   60608 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
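For context on the cri.go "found id:" block above: minikube enumerates kube-system containers by shelling out to crictl with a pod-namespace label filter and treating each line of --quiet output as one container ID. A hedged sketch of that loop (the helper name is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemIDs mirrors the logged command: crictl ps -a --quiet with a
// label filter; every non-empty output line is one container ID.
func listKubeSystemIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 12 in the run above
}
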
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-900843 -n pause-900843
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-900843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-900843 logs -n 25: (1.410526786s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:28 UTC |
	| start   | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:28 UTC | 27 Jan 25 11:29 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-858946 image list | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	| delete  | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	| start   | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:30 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC | 27 Jan 25 11:30 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC | 27 Jan 25 11:31 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:31 UTC |
	| start   | -p pause-900843 --memory=2048  | pause-900843              | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-880670         | offline-crio-880670       | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:32 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-943115      | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p running-upgrade-968925      | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-880670         | offline-crio-880670       | jenkins | v1.35.0 | 27 Jan 25 11:32 UTC | 27 Jan 25 11:32 UTC |
	| start   | -p kubernetes-upgrade-480798   | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:32 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-943115 stop    | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:33 UTC |                     |
	| start   | -p pause-900843                | pause-900843              | jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-968925      | running-upgrade-968925    | jenkins | v1.35.0 | 27 Jan 25 11:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:33:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:33:25.898228   60753 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:33:25.898509   60753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:25.898519   60753 out.go:358] Setting ErrFile to fd 2...
	I0127 11:33:25.898525   60753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:25.898734   60753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:33:25.899369   60753 out.go:352] Setting JSON to false
	I0127 11:33:25.900354   60753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8106,"bootTime":1737969500,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:33:25.900458   60753 start.go:139] virtualization: kvm guest
	I0127 11:33:25.902778   60753 out.go:177] * [running-upgrade-968925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:33:25.904073   60753 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:33:25.904076   60753 notify.go:220] Checking for updates...
	I0127 11:33:25.905356   60753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:33:25.906600   60753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:33:25.907856   60753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:33:25.909215   60753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:33:25.910484   60753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:33:25.912170   60753 config.go:182] Loaded profile config "running-upgrade-968925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:33:25.912663   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:25.912725   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:25.928304   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0127 11:33:25.928812   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:25.929395   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:25.929414   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:25.930004   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:25.930203   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:25.932148   60753 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:33:25.933533   60753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:33:25.934072   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:25.934120   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:25.951511   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0127 11:33:25.952029   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:25.952519   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:25.952542   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:25.952906   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:25.953120   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:25.987581   60753 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:33:25.989063   60753 start.go:297] selected driver: kvm2
	I0127 11:33:25.989095   60753 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 11:33:25.989243   60753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:33:25.990374   60753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:25.990494   60753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:33:26.010130   60753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:33:26.010658   60753 cni.go:84] Creating CNI manager for ""
	I0127 11:33:26.010727   60753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:26.010810   60753 start.go:340] cluster config:
	{Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 11:33:26.010959   60753 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:26.012968   60753 out.go:177] * Starting "running-upgrade-968925" primary control-plane node in "running-upgrade-968925" cluster
	I0127 11:33:24.166712   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:24.169394   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169753   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:24.169776   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169978   60315 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:24.173899   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:33:24.185980   60315 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:24.186105   60315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:33:24.186163   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:24.217311   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:24.217389   60315 ssh_runner.go:195] Run: which lz4
	I0127 11:33:24.221400   60315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:33:24.225509   60315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:33:24.225538   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:33:25.723293   60315 crio.go:462] duration metric: took 1.501912534s to copy over tarball
	I0127 11:33:25.723373   60315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
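The preload step above is a three-part exchange: stat the tarball on the node (absent here, hence the expected exit status 1), scp the ~473 MB cached preload across, then unpack it into /var while preserving extended attributes so file capabilities on the kube binaries survive. A rough Go sketch of the check-and-extract half; the scp leg runs over minikube's ssh_runner and is elided:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Existence probe, as in the log; a non-zero exit means no tarball yet.
	if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("no preload on the node; it would be copied from the local cache first")
		return
	}
	// --xattrs/--xattrs-include keep security.capability intact, and -I lz4
	// decompresses inline while extracting into /var.
	out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	fmt.Println(string(out), err)
}
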
	I0127 11:33:26.014112   60753 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 11:33:26.014160   60753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:33:26.014174   60753 cache.go:56] Caching tarball of preloaded images
	I0127 11:33:26.014285   60753 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:33:26.014305   60753 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0127 11:33:26.014439   60753 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/running-upgrade-968925/config.json ...
	I0127 11:33:26.014649   60753 start.go:360] acquireMachinesLock for running-upgrade-968925: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:33:29.396308   60753 start.go:364] duration metric: took 3.381595049s to acquireMachinesLock for "running-upgrade-968925"
	I0127 11:33:29.396382   60753 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:33:29.396393   60753 fix.go:54] fixHost starting: 
	I0127 11:33:29.396838   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:29.396897   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:29.415599   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0127 11:33:29.416140   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:29.416674   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:29.416699   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:29.417058   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:29.417266   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:29.417417   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetState
	I0127 11:33:29.419226   60753 fix.go:112] recreateIfNeeded on running-upgrade-968925: state=Running err=<nil>
	W0127 11:33:29.419246   60753 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:33:29.533893   60753 out.go:177] * Updating the running kvm2 "running-upgrade-968925" VM ...
	I0127 11:33:29.651403   60753 machine.go:93] provisionDockerMachine start ...
	I0127 11:33:29.651457   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:29.651793   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.654864   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.655307   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.655353   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.655553   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.655766   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.655932   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.656072   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.656229   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.656493   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.656507   60753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:33:29.776045   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-968925
	
	I0127 11:33:29.776075   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:29.776310   60753 buildroot.go:166] provisioning hostname "running-upgrade-968925"
	I0127 11:33:29.776337   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:29.776540   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.779769   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.780260   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.780288   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.780417   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.780596   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.780792   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.781001   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.781196   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.781414   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.781434   60753 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-968925 && echo "running-upgrade-968925" | sudo tee /etc/hostname
	I0127 11:33:29.911865   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-968925
	
	I0127 11:33:29.911908   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.915233   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.915697   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.915727   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.916014   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.916219   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.916407   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.916610   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.916807   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.916991   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.917022   60753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-968925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-968925/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-968925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:33:30.032352   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
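The empty command output above ("SSH cmd err, output: <nil>:") is the expected result of the embedded script: it only prints anything when it has to append a new entry (the tee -a branch echoes the line). Otherwise it either finds a line already ending in the hostname or rewrites the stock 127.0.1.1 entry in place with sed, so after provisioning /etc/hosts carries a line like "127.0.1.1 running-upgrade-968925" and the guest resolves its own name locally.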
	I0127 11:33:30.032383   60753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:33:30.032404   60753 buildroot.go:174] setting up certificates
	I0127 11:33:30.032418   60753 provision.go:84] configureAuth start
	I0127 11:33:30.032430   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:30.032743   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:30.035674   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.036003   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.036024   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.036190   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.038813   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.039238   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.039266   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.039390   60753 provision.go:143] copyHostCerts
	I0127 11:33:30.039461   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:33:30.039472   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:33:30.039523   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:33:30.039655   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:33:30.039666   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:33:30.039690   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:33:30.039781   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:33:30.039791   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:33:30.039813   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:33:30.039887   60753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-968925 san=[127.0.0.1 192.168.72.228 localhost minikube running-upgrade-968925]
	I0127 11:33:30.313182   60753 provision.go:177] copyRemoteCerts
	I0127 11:33:30.313251   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:33:30.313277   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.316120   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.316435   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.316489   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.316958   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:30.317211   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.317430   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:30.317619   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:30.407238   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:33:30.433269   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:33:30.463153   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:33:30.584712   60753 provision.go:87] duration metric: took 552.281977ms to configureAuth
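configureAuth, timed above at roughly half a second, regenerates the machine's server certificate with SANs covering every name the daemon might be reached by (127.0.0.1, the VM IP, localhost, minikube, the profile name) and then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained standard-library sketch of the generation half; it self-signs for brevity where the real flow signs with ca-key.pem, and every value is either copied from the log or illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-968925"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// The SAN list from the log: san=[127.0.0.1 192.168.72.228 localhost minikube running-upgrade-968925]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.228")},
		DNSNames:    []string{"localhost", "minikube", "running-upgrade-968925"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; the real provisioner would pass the CA cert/key as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
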
	I0127 11:33:30.584742   60753 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:33:30.584942   60753 config.go:182] Loaded profile config "running-upgrade-968925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:33:30.585069   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.588420   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.588856   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.588887   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.589090   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:30.589342   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.589502   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.589654   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:30.589828   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:30.590025   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:30.590044   60753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:33:28.192368   60315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.468964657s)
	I0127 11:33:28.192394   60315 crio.go:469] duration metric: took 2.469070397s to extract the tarball
	I0127 11:33:28.192404   60315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:33:28.233159   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:28.276108   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:28.276139   60315 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:33:28.276238   60315 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.276244   60315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.276271   60315 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.276275   60315 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:33:28.276286   60315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.276247   60315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.276298   60315 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.276254   60315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.277925   60315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.277903   60315 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.277907   60315 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:33:28.277927   60315 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.427872   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.428061   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.434839   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.440570   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.457387   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.459239   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.500292   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:33:28.519394   60315 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:33:28.519450   60315 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.519466   60315 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:33:28.519501   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.519501   60315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.519631   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.524827   60315 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:33:28.524864   60315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.524907   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.572604   60315 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:33:28.572660   60315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.572701   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594543   60315 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:33:28.594591   60315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.594640   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594676   60315 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:33:28.594711   60315 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.594744   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.604978   60315 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:33:28.605007   60315 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:33:28.605028   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.605042   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.605103   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.605161   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.605178   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.605235   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.605280   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725558   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.725597   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725601   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.725707   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.725760   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.725793   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.725820   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.841925   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.862363   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.869074   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.869108   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.869120   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.869200   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.869288   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.933917   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:33:28.985479   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:33:28.998015   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:33:29.008764   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:33:29.008846   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:29.012783   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:33:29.012861   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:33:29.047571   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:33:29.222794   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:29.366916   60315 cache_images.go:92] duration metric: took 1.090751434s to LoadCachedImages
	W0127 11:33:29.367016   60315 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
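The cache_images flow above works per image: podman image inspect --format {{.Id}} reads the runtime's image ID, any image whose ID does not match the pinned hash is flagged as "needs transfer", removed with crictl rmi, and reloaded from the on-disk cache under .minikube/cache/images; the final warning fires because the coredns_1.7.0 tarball is missing from that cache. A minimal sketch of the check-and-remove step, assuming the same commands are available (the helper names here are illustrative, not minikube's internals):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageID returns the runtime's ID for an image, or "" if it is absent.
	func imageID(image string) string {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return ""
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		// Expected hash taken from the pause:3.2 log line above.
		image := "registry.k8s.io/pause:3.2"
		want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
		if imageID(image) != want {
			fmt.Printf("%q needs transfer\n", image)
			// Remove the stale image; the cached tarball from
			// .minikube/cache/images/... would then be loaded, as logged.
			exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		}
	}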
	I0127 11:33:29.367036   60315 kubeadm.go:934] updating node { 192.168.83.73 8443 v1.20.0 crio true true} ...
	I0127 11:33:29.367182   60315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-480798 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:33:29.367285   60315 ssh_runner.go:195] Run: crio config
	I0127 11:33:29.430210   60315 cni.go:84] Creating CNI manager for ""
	I0127 11:33:29.430230   60315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:29.430239   60315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:33:29.430257   60315 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.73 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-480798 NodeName:kubernetes-upgrade-480798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:33:29.430387   60315 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-480798"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:33:29.430463   60315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:33:29.440428   60315 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:29.440483   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:29.450433   60315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 11:33:29.466059   60315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:29.480733   60315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:33:29.497078   60315 ssh_runner.go:195] Run: grep 192.168.83.73	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:29.500859   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
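The bash one-liner above updates /etc/hosts idempotently: filter out any existing control-plane.minikube.internal mapping, append the fresh one, and copy the result back via sudo. The same pattern as standalone Go (a hypothetical helper, not minikube's code; writing /etc/hosts requires root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.83.73\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for the control-plane name.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}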
	I0127 11:33:29.514576   60315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:29.643945   60315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:29.663067   60315 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798 for IP: 192.168.83.73
	I0127 11:33:29.663088   60315 certs.go:194] generating shared ca certs ...
	I0127 11:33:29.663106   60315 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.663261   60315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:29.663315   60315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:29.663336   60315 certs.go:256] generating profile certs ...
	I0127 11:33:29.663446   60315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key
	I0127 11:33:29.663471   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt with IP's: []
	I0127 11:33:29.800004   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt ...
	I0127 11:33:29.800038   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt: {Name:mkaa6ca211b0e39160992b60e71795f794b4fa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800243   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key ...
	I0127 11:33:29.800267   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key: {Name:mkba3526bbc1c913be01a6bc4ce4e3baf78ed28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800412   60315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c
	I0127 11:33:29.800436   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.73]
	I0127 11:33:29.963202   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c ...
	I0127 11:33:29.963227   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c: {Name:mk647f7a7f5a0dabbc21fe291d29db85829b422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963364   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c ...
	I0127 11:33:29.963378   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c: {Name:mkbbe66814ffa44807139b1c6c8df1cbfe9d85f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963443   60315 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt
	I0127 11:33:29.963520   60315 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key
	I0127 11:33:29.963577   60315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key
	I0127 11:33:29.963591   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt with IP's: []
	I0127 11:33:30.061333   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt ...
	I0127 11:33:30.061361   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt: {Name:mk13a4dcb74d04f521c59b139c0faacce5465377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:30.061519   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key ...
	I0127 11:33:30.061536   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key: {Name:mk309b6c9e6da261ab0aecbaa4e7871ee8cdd22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
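The certs.go/crypto.go sequence above generates three CA-signed profile certs (client, apiserver with the listed SANs, and the aggregator proxy-client). A compact sketch of what such a signing step involves with crypto/x509 (illustrative only, not minikube's crypto.go; the stand-in CA below replaces the ca.key that minikube loads from .minikube):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; errors are ignored for brevity in this sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert carrying the SANs seen in the apiserver cert log line.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.83.73"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}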
	I0127 11:33:30.061732   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:30.061781   60315 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:30.061794   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:30.061833   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:30.061869   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:30.061901   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:30.061956   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:30.062539   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:30.094530   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:30.121255   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:30.147323   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:30.175961   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 11:33:30.203496   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:33:30.228391   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:30.256448   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:30.282667   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:30.305716   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:30.334009   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:30.359897   60315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:30.379030   60315 ssh_runner.go:195] Run: openssl version
	I0127 11:33:30.386661   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:30.399285   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404091   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404156   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.411200   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:30.424652   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:30.439225   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444544   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444608   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.451131   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:30.465772   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:30.476581   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481472   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481535   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.487353   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
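Each cert installed above also gets an OpenSSL subject-hash symlink, which is what c_rehash would create: openssl x509 -hash -noout prints the subject hash (b5213941 for minikubeCA) and the cert is then linked as /etc/ssl/certs/<hash>.0 so OpenSSL's directory-based lookup finds it. A standalone Go sketch of that step (assumes openssl on PATH; paths taken from the log, root required for the symlink):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent: drop any old link, then point <hash>.0 at the PEM.
		os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}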
	I0127 11:33:30.502830   60315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:30.508448   60315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:33:30.508513   60315 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:30.508611   60315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:30.508664   60315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:30.563952   60315 cri.go:89] found id: ""
	I0127 11:33:30.564015   60315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:33:30.580363   60315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:33:30.601987   60315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:33:30.620766   60315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:33:30.620788   60315 kubeadm.go:157] found existing configuration files:
	
	I0127 11:33:30.620841   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:33:30.634563   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:33:30.634639   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:33:30.645365   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:33:30.657896   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:33:30.657960   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:33:30.669588   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.679304   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:33:30.679367   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.688972   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:33:30.697895   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:33:30.697950   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
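The four grep/rm pairs above implement the stale-config cleanup: a kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm regenerates it. The same check-then-remove pattern in Go (a hypothetical standalone version, not minikube's kubeadm.go):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			data, err := os.ReadFile(c)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: remove so kubeadm rewrites it.
				os.Remove(c)
				fmt.Println("removed stale config:", c)
			}
		}
	}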
	I0127 11:33:30.708950   60315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:33:30.840143   60315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:33:30.840245   60315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:33:30.968066   60315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:33:30.968191   60315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:33:30.968338   60315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:33:31.140896   60315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:33:29.144459   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:33:29.144490   60608 machine.go:96] duration metric: took 6.378062966s to provisionDockerMachine
	I0127 11:33:29.144505   60608 start.go:293] postStartSetup for "pause-900843" (driver="kvm2")
	I0127 11:33:29.144518   60608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:29.144539   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.144843   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:29.144869   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.147518   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.147857   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.147896   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.148013   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.148187   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.148342   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.148441   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.233611   60608 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:29.237809   60608 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:33:29.237835   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:29.237892   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:29.237987   60608 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:29.238115   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:29.247346   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:29.274787   60608 start.go:296] duration metric: took 130.266105ms for postStartSetup
	I0127 11:33:29.274839   60608 fix.go:56] duration metric: took 6.530798739s for fixHost
	I0127 11:33:29.274862   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.278195   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278722   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.278758   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278980   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.279162   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279341   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279491   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.279663   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.279831   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:29.279844   60608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:29.396114   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977609.374897195
	
	I0127 11:33:29.396137   60608 fix.go:216] guest clock: 1737977609.374897195
	I0127 11:33:29.396151   60608 fix.go:229] Guest: 2025-01-27 11:33:29.374897195 +0000 UTC Remote: 2025-01-27 11:33:29.274843307 +0000 UTC m=+15.354336795 (delta=100.053888ms)
	I0127 11:33:29.396176   60608 fix.go:200] guest clock delta is within tolerance: 100.053888ms
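The clock check above runs date +%s.%N on the guest and compares it against the host-side reference timestamp; the resulting 100.053888ms delta is within tolerance, so no resync happens. A sketch of that comparison using the two timestamps from the log (the tolerance constant here is an assumption, not minikube's value):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns `date +%s.%N` output into a time.Time.
	func parseEpoch(s string) time.Time {
		sec, nsec, _ := strings.Cut(s, ".")
		se, _ := strconv.ParseInt(sec, 10, 64)
		ns, _ := strconv.ParseInt(nsec, 10, 64)
		return time.Unix(se, ns)
	}

	func main() {
		guest := parseEpoch("1737977609.374897195")  // guest clock, from the log
		remote := parseEpoch("1737977609.274843307") // host-side reference time
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		// Prints delta=100.053888ms, matching the log line above.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= 2*time.Second)
	}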
	I0127 11:33:29.396183   60608 start.go:83] releasing machines lock for "pause-900843", held for 6.652178072s
	I0127 11:33:29.396207   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.396466   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:29.399408   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.399799   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.399826   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.400009   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400559   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400751   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400874   60608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:29.400923   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.400955   60608 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:29.400977   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.403665   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.403998   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404026   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404049   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404241   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404423   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404481   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404512   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404854   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404902   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.405033   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.405143   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.513057   60608 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:29.518966   60608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:29.677793   60608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:29.687702   60608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:29.687796   60608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:29.700107   60608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
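The find command above sidelines any bridge/podman CNI configs by renaming them with a .mk_disabled suffix; here none were present, so there was nothing to disable. Roughly the same logic as standalone Go (illustrative; minikube executes find over SSH):

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, _ := os.ReadDir("/etc/cni/net.d")
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Match the find predicate: -name *bridge* -or -name *podman*.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join("/etc/cni/net.d", name)
				os.Rename(p, p+".mk_disabled")
			}
		}
	}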
	I0127 11:33:29.700132   60608 start.go:495] detecting cgroup driver to use...
	I0127 11:33:29.700206   60608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:29.720239   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:29.736759   60608 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:29.736864   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:29.751575   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:29.766382   60608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:29.929606   60608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:30.068031   60608 docker.go:233] disabling docker service ...
	I0127 11:33:30.068092   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:30.090234   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:30.104920   60608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:30.267073   60608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:30.410099   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:30.426262   60608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:30.448794   60608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:33:30.448851   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.461453   60608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:30.461514   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.476685   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.488442   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.498740   60608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:30.512774   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.526619   60608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.539865   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.551635   60608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:30.562250   60608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
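The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A Go rendition of the first two rewrites on an in-memory copy (illustrative; the real flow shells out to sed, and the sample input below is made up):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	`
		// Same anchors as the sed expressions: replace the whole matching line.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		re = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}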
	I0127 11:33:30.572647   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:30.724876   60608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:31.947292   60608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.222375748s)
	I0127 11:33:31.947338   60608 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:31.947399   60608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:31.964628   60608 start.go:563] Will wait 60s for crictl version
	I0127 11:33:31.964697   60608 ssh_runner.go:195] Run: which crictl
	I0127 11:33:31.971037   60608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:32.104049   60608 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:33:32.104148   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.356574   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.594082   60608 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:33:31.293692   60315 out.go:235]   - Generating certificates and keys ...
	I0127 11:33:31.293813   60315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:33:31.293920   60315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:33:31.294024   60315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:33:31.694694   60315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:33:31.821080   60315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:33:32.143166   60315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:33:32.197137   60315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:33:32.197479   60315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.425895   60315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:33:32.426224   60315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.589528   60315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:33:32.778137   60315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:33:32.595350   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:32.598873   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599218   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:32.599253   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599510   60608 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:32.624423   60608 kubeadm.go:883] updating cluster {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:32.624616   60608 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:33:32.624686   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.763015   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.763047   60608 crio.go:433] Images already preloaded, skipping extraction
	I0127 11:33:32.763107   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.879425   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.879449   60608 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:33:32.879459   60608 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.32.1 crio true true} ...
	I0127 11:33:32.879577   60608 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-900843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:33:32.879673   60608 ssh_runner.go:195] Run: crio config
	I0127 11:33:33.004770   60608 cni.go:84] Creating CNI manager for ""
	I0127 11:33:33.004798   60608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:33.004812   60608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:33:33.004845   60608 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-900843 NodeName:pause-900843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:33:33.005056   60608 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-900843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:33:33.005120   60608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:33:33.053115   60608 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:33.053188   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:33.063741   60608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 11:33:33.082239   60608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:33.155905   60608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 11:33:33.190014   60608 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:33.194134   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:33.428935   60608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:33.463354   60608 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843 for IP: 192.168.50.246
	I0127 11:33:33.463375   60608 certs.go:194] generating shared ca certs ...
	I0127 11:33:33.463394   60608 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:33.463564   60608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:33.463652   60608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:33.463669   60608 certs.go:256] generating profile certs ...
	I0127 11:33:33.463840   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/client.key
	I0127 11:33:33.463939   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key.ff28fce8
	I0127 11:33:33.463981   60608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key
	I0127 11:33:33.464081   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:33.464119   60608 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:33.464129   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:33.464162   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:33.464195   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:33.464226   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:33.464280   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:33.465040   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:33.500924   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:33.535736   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:33.568188   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:33.601451   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:33:33.626636   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:33:33.650258   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:33.698501   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:33.726687   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:33.756031   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:33.781042   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:33.807550   60608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:33.833954   60608 ssh_runner.go:195] Run: openssl version
	I0127 11:33:33.840318   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:33.856131   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860824   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860917   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.867171   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:33.879120   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:33.890622   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895292   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895350   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.900938   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:33.910937   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:33.922304   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927290   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927347   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.933682   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
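	Each test -L / ln -fs pair above maintains OpenSSL's hashed-directory layout in /etc/ssl/certs: the link name is the certificate's subject hash (what openssl x509 -hash -noout prints) plus a ".0" suffix, which is how TLS clients look a CA up by subject. A minimal sketch of the same step:
	
	  # derive the hash, then create the lookup symlink (e.g. b5213941.0 for minikubeCA)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"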
	I0127 11:33:33.947503   60608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:33.953994   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:33:33.160573   60315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:33:33.160670   60315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:33:33.224218   60315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:33:33.788353   60315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:33:33.899841   60315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:33:33.976565   60315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:33:33.993549   60315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:33:33.994045   60315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:33:33.994107   60315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:33:34.115038   60315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:33:32.022481   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:33:32.022525   60753 machine.go:96] duration metric: took 2.3710731s to provisionDockerMachine
	I0127 11:33:32.022540   60753 start.go:293] postStartSetup for "running-upgrade-968925" (driver="kvm2")
	I0127 11:33:32.022554   60753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:32.022576   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.022884   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:32.022923   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.025998   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.026371   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.026413   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.026690   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.026910   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.027109   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.027238   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:32.115577   60753 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:32.120297   60753 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 11:33:32.120324   60753 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:32.120403   60753 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:32.120509   60753 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:32.120631   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:32.129361   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:32.157413   60753 start.go:296] duration metric: took 134.852355ms for postStartSetup
	I0127 11:33:32.157464   60753 fix.go:56] duration metric: took 2.7610705s for fixHost
	I0127 11:33:32.157503   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.160332   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.160757   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.160789   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.161055   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.161278   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.161437   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.161590   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.161742   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:32.161938   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:32.161951   60753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:32.285000   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977612.263526312
	
	I0127 11:33:32.285025   60753 fix.go:216] guest clock: 1737977612.263526312
	I0127 11:33:32.285033   60753 fix.go:229] Guest: 2025-01-27 11:33:32.263526312 +0000 UTC Remote: 2025-01-27 11:33:32.157469968 +0000 UTC m=+6.307211810 (delta=106.056344ms)
	I0127 11:33:32.285061   60753 fix.go:200] guest clock delta is within tolerance: 106.056344ms
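	The clock check above runs date +%s.%N on the guest over SSH and compares it with the host wall clock for the same instant; the start continues only because the ~106ms delta is inside minikube's tolerance. Roughly, in shell form (address illustrative; the real comparison lives in fix.go):
	
	  guest=$(ssh docker@192.168.72.228 'date +%s.%N')
	  host=$(date +%s.%N)
	  awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3fs\n", g - h }'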
	I0127 11:33:32.285071   60753 start.go:83] releasing machines lock for "running-upgrade-968925", held for 2.888710729s
	I0127 11:33:32.285094   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.285357   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:32.288444   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.288934   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.288962   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.289160   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289687   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289854   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289934   60753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:32.289983   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.290256   60753 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:32.290277   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.293060   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293422   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.293463   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293594   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293643   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.293787   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.293963   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.294064   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:32.294342   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.294373   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.294419   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.294567   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.294709   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.294828   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	W0127 11:33:32.406391   60753 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0127 11:33:32.406481   60753 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:32.411803   60753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:32.607162   60753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:32.614146   60753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:32.614209   60753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:32.629721   60753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
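	The find above disables competing CNI configs by renaming anything bridge- or podman-related with a .mk_disabled suffix, so CRI-O ignores them while they stay restorable; here only 87-podman-bridge.conflist matched. The same command in standalone form, with the quoting the logged invocation relies on ssh to preserve:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;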
	I0127 11:33:32.629739   60753 start.go:495] detecting cgroup driver to use...
	I0127 11:33:32.629804   60753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:32.649185   60753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:32.668326   60753 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:32.668395   60753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:32.685821   60753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:32.698804   60753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:32.856379   60753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:33.013882   60753 docker.go:233] disabling docker service ...
	I0127 11:33:33.013959   60753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:33.026052   60753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:33.038472   60753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:33.181511   60753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:33.325886   60753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:33.341457   60753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:33.363109   60753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 11:33:33.363180   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.380863   60753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:33.380943   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.404474   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.413549   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.422753   60753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:33.435725   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.448044   60753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.464740   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.478143   60753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:33.488185   60753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:33:33.499059   60753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:33.741704   60753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:34.502655   60753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:34.502727   60753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:34.508164   60753 start.go:563] Will wait 60s for crictl version
	I0127 11:33:34.508225   60753 ssh_runner.go:195] Run: which crictl
	I0127 11:33:34.511843   60753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:34.550104   60753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0127 11:33:34.550183   60753 ssh_runner.go:195] Run: crio --version
	I0127 11:33:34.589506   60753 ssh_runner.go:195] Run: crio --version
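	Taken together, the tee and sed commands from 11:33:33 point crictl at CRI-O's socket and rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. An approximate view of the end state (inferred from the logged edits, not captured from the node):
	
	  cat /etc/crictl.yaml
	  # runtime-endpoint: unix:///var/run/crio/crio.sock
	  cat /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.7"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  # ]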
	I0127 11:33:34.632821   60753 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0127 11:33:34.634185   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:34.637149   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:34.637529   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:34.637560   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:34.637846   60753 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:34.641492   60753 kubeadm.go:883] updating cluster {Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0127 11:33:34.641616   60753 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 11:33:34.641669   60753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:34.674593   60753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0127 11:33:34.674649   60753 ssh_runner.go:195] Run: which lz4
	I0127 11:33:34.683097   60753 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:33:34.687363   60753 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:33:34.687394   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0127 11:33:34.192855   60315 out.go:235]   - Booting up control plane ...
	I0127 11:33:34.193031   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:33:34.193145   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:33:34.193251   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:33:34.193399   60315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:33:34.193617   60315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:33:33.960197   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:33:33.967566   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:33:33.974130   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:33:33.980629   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:33:33.986481   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
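	The stat plus the run of openssl x509 ... -checkend 86400 calls above are the certificate health gate: -checkend exits non-zero when a certificate expires within the next 86400 seconds (24h), which would force regeneration instead of the "skipping valid ... cert" path seen earlier. Sketch:
	
	  # the exit status is the whole check: 0 = still valid for at least a day
	  if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver cert expires within 24h; regenerate"
	  fi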
	I0127 11:33:33.992283   60608 kubeadm.go:392] StartCluster: {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:33.992430   60608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:33.992507   60608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:34.077457   60608 cri.go:89] found id: "19636b08b7622d750f5054d7b2d51fec4669f6c31448987e876b47e95d1eb0fb"
	I0127 11:33:34.077483   60608 cri.go:89] found id: "745d2dc203d2a6a3963920aa88049cf50d6ad6cb186e9782f5e2afe41c3ff84b"
	I0127 11:33:34.077490   60608 cri.go:89] found id: "2ffcce7727f3745431b7444cf89fc00c0ee7497937665bc92d12a40377390157"
	I0127 11:33:34.077495   60608 cri.go:89] found id: "347e50c706723add6c69c1bfeb19290636137a1e8765b41976cda1b16ed4076b"
	I0127 11:33:34.077500   60608 cri.go:89] found id: "13eaf4245fb539e865ee03fefd604b4b88fe4ff8af14b5c168acea7eb3f401be"
	I0127 11:33:34.077505   60608 cri.go:89] found id: "20e60f86899ccc2f414ff0642e113f31da0728a7b8375834767fbecc9be0c358"
	I0127 11:33:34.077509   60608 cri.go:89] found id: "1383f1d93fdba8af8ad1360ce250b50c269ebbf4b6c6fa1895494ae9968dadcb"
	I0127 11:33:34.077513   60608 cri.go:89] found id: "f72a1bdee26af527c97f25b5afd7ef636cba09b54fb369ae2a88f66006e1eb76"
	I0127 11:33:34.077517   60608 cri.go:89] found id: "0574c0c89c037a6f4a9e6f77dd5a5fb3dbb4526bb496e0e10a98db0cabdc5aae"
	I0127 11:33:34.077526   60608 cri.go:89] found id: "506354c5ff5e7ac4c31a161e4a512782957781f5a355d36b8b16aa8011149b3b"
	I0127 11:33:34.077531   60608 cri.go:89] found id: "1064451fc9f3850cee5e45dbbd6baea628acfab95608e700033ce004d3377c44"
	I0127 11:33:34.077534   60608 cri.go:89] found id: "5cb36e64d3ac093b2b4031fdd0eeedbf3b409bea7fc791055f42729930ad4409"
	I0127 11:33:34.077539   60608 cri.go:89] found id: ""
	I0127 11:33:34.077585   60608 ssh_runner.go:195] Run: sudo runc list -f json
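	The twelve container IDs above come from the label-filtered crictl listing at 11:33:33.992507; the follow-up runc query is used because crictl does not report the paused state, while runc's OCI-level status does. The same two queries, standalone:
	
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  sudo runc list -f json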

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-900843 -n pause-900843
helpers_test.go:261: (dbg) Run:  kubectl --context pause-900843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-900843 -n pause-900843
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-900843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-900843 logs -n 25: (1.269629531s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:27 UTC | 27 Jan 25 11:28 UTC |
	| start   | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:28 UTC | 27 Jan 25 11:29 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-858946 image list | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	| delete  | -p test-preload-858946         | test-preload-858946       | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:29 UTC |
	| start   | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:29 UTC | 27 Jan 25 11:30 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC | 27 Jan 25 11:30 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:30 UTC | 27 Jan 25 11:31 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-794344       | scheduled-stop-794344     | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:31 UTC |
	| start   | -p pause-900843 --memory=2048  | pause-900843              | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-880670         | offline-crio-880670       | jenkins | v1.35.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:32 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-943115      | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p running-upgrade-968925      | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:31 UTC | 27 Jan 25 11:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-880670         | offline-crio-880670       | jenkins | v1.35.0 | 27 Jan 25 11:32 UTC | 27 Jan 25 11:32 UTC |
	| start   | -p kubernetes-upgrade-480798   | kubernetes-upgrade-480798 | jenkins | v1.35.0 | 27 Jan 25 11:32 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-943115 stop    | minikube                  | jenkins | v1.26.0 | 27 Jan 25 11:33 UTC |                     |
	| start   | -p pause-900843                | pause-900843              | jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-968925      | running-upgrade-968925    | jenkins | v1.35.0 | 27 Jan 25 11:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:33:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
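	Decoding the header: severity I/W/E/F, then month+day, wall-clock time with microseconds, the emitting PID, and the source file:line, so "I0127 11:33:25.898228   60753 out.go:345]" is an Info entry logged by PID 60753 from out.go line 345. For example, to pull only warning-and-above lines from a saved copy (filename illustrative):
	
	  grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log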
	I0127 11:33:25.898228   60753 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:33:25.898509   60753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:25.898519   60753 out.go:358] Setting ErrFile to fd 2...
	I0127 11:33:25.898525   60753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:33:25.898734   60753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:33:25.899369   60753 out.go:352] Setting JSON to false
	I0127 11:33:25.900354   60753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8106,"bootTime":1737969500,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:33:25.900458   60753 start.go:139] virtualization: kvm guest
	I0127 11:33:25.902778   60753 out.go:177] * [running-upgrade-968925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:33:25.904073   60753 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:33:25.904076   60753 notify.go:220] Checking for updates...
	I0127 11:33:25.905356   60753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:33:25.906600   60753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:33:25.907856   60753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:33:25.909215   60753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:33:25.910484   60753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:33:25.912170   60753 config.go:182] Loaded profile config "running-upgrade-968925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:33:25.912663   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:25.912725   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:25.928304   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0127 11:33:25.928812   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:25.929395   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:25.929414   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:25.930004   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:25.930203   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:25.932148   60753 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:33:25.933533   60753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:33:25.934072   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:25.934120   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:25.951511   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0127 11:33:25.952029   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:25.952519   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:25.952542   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:25.952906   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:25.953120   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:25.987581   60753 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:33:25.989063   60753 start.go:297] selected driver: kvm2
	I0127 11:33:25.989095   60753 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 11:33:25.989243   60753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:33:25.990374   60753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:25.990494   60753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:33:26.010130   60753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:33:26.010658   60753 cni.go:84] Creating CNI manager for ""
	I0127 11:33:26.010727   60753 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:26.010810   60753 start.go:340] cluster config:
	{Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 11:33:26.010959   60753 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:33:26.012968   60753 out.go:177] * Starting "running-upgrade-968925" primary control-plane node in "running-upgrade-968925" cluster
	I0127 11:33:24.166712   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) Calling .GetIP
	I0127 11:33:24.169394   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169753   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:4c:2c", ip: ""} in network mk-kubernetes-upgrade-480798: {Iface:virbr1 ExpiryTime:2025-01-27 12:33:13 +0000 UTC Type:0 Mac:52:54:00:19:4c:2c Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:kubernetes-upgrade-480798 Clientid:01:52:54:00:19:4c:2c}
	I0127 11:33:24.169776   60315 main.go:141] libmachine: (kubernetes-upgrade-480798) DBG | domain kubernetes-upgrade-480798 has defined IP address 192.168.83.73 and MAC address 52:54:00:19:4c:2c in network mk-kubernetes-upgrade-480798
	I0127 11:33:24.169978   60315 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:24.173899   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
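	The bash one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway IP, write to a temp file, then copy it back via sudo (a plain shell redirect could not write the root-owned file). Unrolled:
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.83.1\thost.minikube.internal\n'
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$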
	I0127 11:33:24.185980   60315 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:24.186105   60315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:33:24.186163   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:24.217311   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:24.217389   60315 ssh_runner.go:195] Run: which lz4
	I0127 11:33:24.221400   60315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:33:24.225509   60315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:33:24.225538   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:33:25.723293   60315 crio.go:462] duration metric: took 1.501912534s to copy over tarball
	I0127 11:33:25.723373   60315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
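	Because the stat at 11:33:24.225 found no /preloaded.tar.lz4 on the node, minikube copies its cached preload (473,237,281 bytes for v1.20.0) over SSH and unpacks it into /var, after which crictl can resolve the control-plane images without pulling. The extraction step, standalone:
	
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo crictl images --output json | head   # should now list the registry.k8s.io/kube-* images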
	I0127 11:33:26.014112   60753 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 11:33:26.014160   60753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:33:26.014174   60753 cache.go:56] Caching tarball of preloaded images
	I0127 11:33:26.014285   60753 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:33:26.014305   60753 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0127 11:33:26.014439   60753 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/running-upgrade-968925/config.json ...
	I0127 11:33:26.014649   60753 start.go:360] acquireMachinesLock for running-upgrade-968925: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:33:29.396308   60753 start.go:364] duration metric: took 3.381595049s to acquireMachinesLock for "running-upgrade-968925"
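
acquireMachinesLock serializes machine create/start operations across the minikube processes running in parallel on this host; the 3.4s wait here is this profile queueing behind another test. The Spec fields printed above ({Name: ... Delay:500ms Timeout:13m0s Cancel:<nil>}) appear to correspond to github.com/juju/mutex; a sketch assuming that package (the lock name is illustrative):

	package main

	import (
		"fmt"
		"time"

		"github.com/juju/clock"
		"github.com/juju/mutex/v2"
	)

	func main() {
		spec := mutex.Spec{
			Name:    "machineslock", // illustrative, not minikube's exact name
			Clock:   clock.WallClock,
			Delay:   500 * time.Millisecond,
			Timeout: 13 * time.Minute,
		}
		start := time.Now()
		releaser, err := mutex.Acquire(spec)
		if err != nil {
			panic(err)
		}
		defer releaser.Release()
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}
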
	I0127 11:33:29.396382   60753 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:33:29.396393   60753 fix.go:54] fixHost starting: 
	I0127 11:33:29.396838   60753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:33:29.396897   60753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:33:29.415599   60753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0127 11:33:29.416140   60753 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:33:29.416674   60753 main.go:141] libmachine: Using API Version  1
	I0127 11:33:29.416699   60753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:33:29.417058   60753 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:33:29.417266   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:29.417417   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetState
	I0127 11:33:29.419226   60753 fix.go:112] recreateIfNeeded on running-upgrade-968925: state=Running err=<nil>
	W0127 11:33:29.419246   60753 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:33:29.533893   60753 out.go:177] * Updating the running kvm2 "running-upgrade-968925" VM ...
	I0127 11:33:29.651403   60753 machine.go:93] provisionDockerMachine start ...
	I0127 11:33:29.651457   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:29.651793   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.654864   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.655307   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.655353   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.655553   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.655766   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.655932   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.656072   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.656229   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.656493   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.656507   60753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:33:29.776045   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-968925
	
	I0127 11:33:29.776075   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:29.776310   60753 buildroot.go:166] provisioning hostname "running-upgrade-968925"
	I0127 11:33:29.776337   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:29.776540   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.779769   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.780260   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.780288   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.780417   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.780596   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.780792   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.781001   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.781196   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.781414   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.781434   60753 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-968925 && echo "running-upgrade-968925" | sudo tee /etc/hostname
	I0127 11:33:29.911865   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-968925
	
	I0127 11:33:29.911908   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:29.915233   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.915697   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:29.915727   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:29.916014   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:29.916219   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.916407   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:29.916610   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:29.916807   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.916991   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:29.917022   60753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-968925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-968925/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-968925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:33:30.032352   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
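
The empty command output above means the /etc/hosts fixup succeeded: the provisioner ensures some line ends with the machine name, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. The same logic as the shell snippet, as a pure-Go sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	// ensureHostsEntry reproduces the shell logic from the log: if no line
	// ends with the hostname, either rewrite an existing 127.0.1.1 line or
	// append a new one.
	func ensureHostsEntry(hosts []byte, name string) []byte {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(hosts) {
			return hosts // already present
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.Match(hosts) {
			return re.ReplaceAll(hosts, []byte("127.0.1.1 "+name))
		}
		return append(hosts, []byte("\n127.0.1.1 "+name+"\n")...)
	}

	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(string(ensureHostsEntry([]byte(in), "running-upgrade-968925")))
	}
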
	I0127 11:33:30.032383   60753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:33:30.032404   60753 buildroot.go:174] setting up certificates
	I0127 11:33:30.032418   60753 provision.go:84] configureAuth start
	I0127 11:33:30.032430   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetMachineName
	I0127 11:33:30.032743   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:30.035674   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.036003   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.036024   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.036190   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.038813   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.039238   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.039266   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.039390   60753 provision.go:143] copyHostCerts
	I0127 11:33:30.039461   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:33:30.039472   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:33:30.039523   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:33:30.039655   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:33:30.039666   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:33:30.039690   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:33:30.039781   60753 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:33:30.039791   60753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:33:30.039813   60753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:33:30.039887   60753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-968925 san=[127.0.0.1 192.168.72.228 localhost minikube running-upgrade-968925]
	I0127 11:33:30.313182   60753 provision.go:177] copyRemoteCerts
	I0127 11:33:30.313251   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:33:30.313277   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.316120   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.316435   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.316489   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.316958   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:30.317211   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.317430   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:30.317619   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:30.407238   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:33:30.433269   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:33:30.463153   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:33:30.584712   60753 provision.go:87] duration metric: took 552.281977ms to configureAuth
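
configureAuth regenerates the docker-machine-style server certificate so its SANs cover every address the VM answers on: loopback, the DHCP lease IP, and the host names (see the san=[...] list above). A self-contained crypto/x509 sketch of such a SAN-bearing server cert; it self-signs for brevity, whereas minikube signs with its CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-968925"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.228")},
			DNSNames:    []string{"localhost", "minikube", "running-upgrade-968925"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
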
	I0127 11:33:30.584742   60753 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:33:30.584942   60753 config.go:182] Loaded profile config "running-upgrade-968925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:33:30.585069   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:30.588420   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.588856   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:30.588887   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:30.589090   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:30.589342   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.589502   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:30.589654   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:30.589828   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:30.590025   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:30.590044   60753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:33:28.192368   60315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.468964657s)
	I0127 11:33:28.192394   60315 crio.go:469] duration metric: took 2.469070397s to extract the tarball
	I0127 11:33:28.192404   60315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:33:28.233159   60315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:28.276108   60315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:33:28.276139   60315 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:33:28.276238   60315 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.276244   60315 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.276271   60315 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.276275   60315 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:33:28.276286   60315 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.276247   60315 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.276298   60315 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.276254   60315 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.277925   60315 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.277903   60315 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.277901   60315 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.277907   60315 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:33:28.277927   60315 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.277902   60315 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.427872   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.428061   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.434839   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.440570   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.457387   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.459239   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.500292   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:33:28.519394   60315 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:33:28.519450   60315 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.519466   60315 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:33:28.519501   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.519501   60315 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.519631   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.524827   60315 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:33:28.524864   60315 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.524907   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.572604   60315 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:33:28.572660   60315 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.572701   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594543   60315 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:33:28.594591   60315 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.594640   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.594676   60315 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:33:28.594711   60315 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.594744   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.604978   60315 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:33:28.605007   60315 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:33:28.605028   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.605042   60315 ssh_runner.go:195] Run: which crictl
	I0127 11:33:28.605103   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.605161   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.605178   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.605235   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.605280   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725558   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.725597   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.725601   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.725707   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.725760   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.725793   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.725820   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.841925   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:33:28.862363   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:33:28.869074   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:33:28.869108   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:33:28.869120   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:33:28.869200   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:33:28.869288   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:28.933917   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:33:28.985479   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:33:28.998015   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:33:29.008764   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:33:29.008846   60315 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:33:29.012783   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:33:29.012861   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:33:29.047571   60315 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:33:29.222794   60315 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:33:29.366916   60315 cache_images.go:92] duration metric: took 1.090751434s to LoadCachedImages
	W0127 11:33:29.367016   60315 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
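
LoadCachedImages walked each required v1.20.0 image: podman inspect reports the stored image ID, any image whose ID differs from the pinned hash is rmi'd and reloaded from the on-disk cache, and here the load ultimately fails because the coredns cache file is absent, so kubeadm will pull from the registry instead. The core "needs transfer" comparison, sketched (expected ID taken from the pause:3.2 line above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer mirrors the check in the log: an image needs transfer
	// when the runtime either lacks it or stores a different ID than the
	// one pinned in minikube's cache.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		ok := needsTransfer("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
		fmt.Println("needs transfer:", ok)
	}
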
	I0127 11:33:29.367036   60315 kubeadm.go:934] updating node { 192.168.83.73 8443 v1.20.0 crio true true} ...
	I0127 11:33:29.367182   60315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-480798 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
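
The kubelet drop-in above pins the versioned binary, the CRI-O socket, and the node identity. A sketch of rendering it from a small config struct with text/template (a trimmed version of the unit printed in the log):

	package main

	import (
		"os"
		"text/template"
	)

	type kubeletConfig struct {
		Version, NodeName, NodeIP string
	}

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, kubeletConfig{"v1.20.0", "kubernetes-upgrade-480798", "192.168.83.73"})
	}
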
	I0127 11:33:29.367285   60315 ssh_runner.go:195] Run: crio config
	I0127 11:33:29.430210   60315 cni.go:84] Creating CNI manager for ""
	I0127 11:33:29.430230   60315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:29.430239   60315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:33:29.430257   60315 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.73 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-480798 NodeName:kubernetes-upgrade-480798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:33:29.430387   60315 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-480798"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
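That kubeadm config is one file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks the stream and prints each document's kind, assuming gopkg.in/yaml.v3 (the path is the one from the log):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]any
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Println(doc["apiVersion"], doc["kind"])
		}
	}
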
	I0127 11:33:29.430463   60315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:33:29.440428   60315 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:29.440483   60315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:29.450433   60315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 11:33:29.466059   60315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:29.480733   60315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:33:29.497078   60315 ssh_runner.go:195] Run: grep 192.168.83.73	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:29.500859   60315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
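
The one-liner above pins control-plane.minikube.internal in /etc/hosts: strip any old entry, append the current IP, and cp the rewritten file into place in one step so readers never see a half-written file. Its filtering half, as a Go sketch:

	package main

	import (
		"fmt"
		"strings"
	)

	// pinControlPlane drops any line ending in the control-plane name and
	// appends a fresh IP<TAB>name entry, matching the grep -v / echo pair.
	func pinControlPlane(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(append(kept, ip+"\t"+name), "\n")
	}

	func main() {
		in := "127.0.0.1 localhost\n192.168.1.5\tcontrol-plane.minikube.internal"
		fmt.Println(pinControlPlane(in, "192.168.83.73", "control-plane.minikube.internal"))
	}
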
	I0127 11:33:29.514576   60315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:29.643945   60315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:29.663067   60315 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798 for IP: 192.168.83.73
	I0127 11:33:29.663088   60315 certs.go:194] generating shared ca certs ...
	I0127 11:33:29.663106   60315 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.663261   60315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:29.663315   60315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:29.663336   60315 certs.go:256] generating profile certs ...
	I0127 11:33:29.663446   60315 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key
	I0127 11:33:29.663471   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt with IP's: []
	I0127 11:33:29.800004   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt ...
	I0127 11:33:29.800038   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.crt: {Name:mkaa6ca211b0e39160992b60e71795f794b4fa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800243   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key ...
	I0127 11:33:29.800267   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/client.key: {Name:mkba3526bbc1c913be01a6bc4ce4e3baf78ed28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.800412   60315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c
	I0127 11:33:29.800436   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.73]
	I0127 11:33:29.963202   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c ...
	I0127 11:33:29.963227   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c: {Name:mk647f7a7f5a0dabbc21fe291d29db85829b422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963364   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c ...
	I0127 11:33:29.963378   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c: {Name:mkbbe66814ffa44807139b1c6c8df1cbfe9d85f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:29.963443   60315 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt
	I0127 11:33:29.963520   60315 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key.f7cb7a4c -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key
	I0127 11:33:29.963577   60315 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key
	I0127 11:33:29.963591   60315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt with IP's: []
	I0127 11:33:30.061333   60315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt ...
	I0127 11:33:30.061361   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt: {Name:mk13a4dcb74d04f521c59b139c0faacce5465377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:30.061519   60315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key ...
	I0127 11:33:30.061536   60315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key: {Name:mk309b6c9e6da261ab0aecbaa4e7871ee8cdd22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:30.061732   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:30.061781   60315 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:30.061794   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:30.061833   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:30.061869   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:30.061901   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:30.061956   60315 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:30.062539   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:30.094530   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:30.121255   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:30.147323   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:30.175961   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 11:33:30.203496   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:33:30.228391   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:30.256448   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kubernetes-upgrade-480798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:30.282667   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:30.305716   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:30.334009   60315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:30.359897   60315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:30.379030   60315 ssh_runner.go:195] Run: openssl version
	I0127 11:33:30.386661   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:30.399285   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404091   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.404156   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:30.411200   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:30.424652   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:30.439225   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444544   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.444608   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:30.451131   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:30.465772   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:30.476581   60315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481472   60315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.481535   60315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:30.487353   60315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
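
The .0 symlinks come from OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found by a file named after its subject hash, which `openssl x509 -hash` prints (b5213941 for minikubeCA here). A sketch of the hash-and-link step, shelling out to openssl as the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert installs a "<subject-hash>.0" symlink so OpenSSL can find
	// the CA by directory lookup.
	func linkCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
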
	I0127 11:33:30.502830   60315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:30.508448   60315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:33:30.508513   60315 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-480798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-480798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:30.508611   60315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:30.508664   60315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:30.563952   60315 cri.go:89] found id: ""
	I0127 11:33:30.564015   60315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:33:30.580363   60315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:33:30.601987   60315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:33:30.620766   60315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:33:30.620788   60315 kubeadm.go:157] found existing configuration files:
	
	I0127 11:33:30.620841   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:33:30.634563   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:33:30.634639   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:33:30.645365   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:33:30.657896   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:33:30.657960   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:33:30.669588   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.679304   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:33:30.679367   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:33:30.688972   60315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:33:30.697895   60315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:33:30.697950   60315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
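
Stale-config cleanup checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes anything missing or pointing elsewhere, so kubeadm regenerates them; on this first start all four greps fail with status 2 (file absent) and the rm -f calls are no-ops. The loop, sketched:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfig mirrors the grep-then-rm pair in the log: keep a
	// kubeconfig only if it already points at the expected endpoint.
	func cleanStaleConfig(path, endpoint string) {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return // up to date, keep it
		}
		os.Remove(path) // rm -f semantics: ignore "not found"
		fmt.Printf("removed stale %s\n", path)
	}

	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			cleanStaleConfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
		}
	}
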
	I0127 11:33:30.708950   60315 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:33:30.840143   60315 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:33:30.840245   60315 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:33:30.968066   60315 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:33:30.968191   60315 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:33:30.968338   60315 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:33:31.140896   60315 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
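
kubeadm init runs with the versioned binary prefixed onto PATH and a fixed waiver list of preflight checks that are expected to trip inside a small test VM (pre-existing dirs and manifests, port 10250, swap, CPU, memory). A sketch of assembling that command line (waiver list abbreviated):

	package main

	import (
		"fmt"
		"strings"
	)

	func kubeadmInitCmd(version, configPath string, ignored []string) string {
		return fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
			version, configPath, strings.Join(ignored, ","))
	}

	func main() {
		fmt.Println(kubeadmInitCmd("v1.20.0", "/var/tmp/minikube/kubeadm.yaml",
			[]string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"}))
	}
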
	I0127 11:33:29.144459   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:33:29.144490   60608 machine.go:96] duration metric: took 6.378062966s to provisionDockerMachine
	I0127 11:33:29.144505   60608 start.go:293] postStartSetup for "pause-900843" (driver="kvm2")
	I0127 11:33:29.144518   60608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:29.144539   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.144843   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:29.144869   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.147518   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.147857   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.147896   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.148013   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.148187   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.148342   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.148441   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.233611   60608 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:29.237809   60608 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:33:29.237835   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:29.237892   60608 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:29.237987   60608 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:29.238115   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:29.247346   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:29.274787   60608 start.go:296] duration metric: took 130.266105ms for postStartSetup
	I0127 11:33:29.274839   60608 fix.go:56] duration metric: took 6.530798739s for fixHost
	I0127 11:33:29.274862   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.278195   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278722   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.278758   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.278980   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.279162   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279341   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.279491   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.279663   60608 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:29.279831   60608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0127 11:33:29.279844   60608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:29.396114   60608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977609.374897195
	
	I0127 11:33:29.396137   60608 fix.go:216] guest clock: 1737977609.374897195
	I0127 11:33:29.396151   60608 fix.go:229] Guest: 2025-01-27 11:33:29.374897195 +0000 UTC Remote: 2025-01-27 11:33:29.274843307 +0000 UTC m=+15.354336795 (delta=100.053888ms)
	I0127 11:33:29.396176   60608 fix.go:200] guest clock delta is within tolerance: 100.053888ms
	I0127 11:33:29.396183   60608 start.go:83] releasing machines lock for "pause-900843", held for 6.652178072s
	I0127 11:33:29.396207   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.396466   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:29.399408   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.399799   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.399826   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.400009   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400559   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400751   60608 main.go:141] libmachine: (pause-900843) Calling .DriverName
	I0127 11:33:29.400874   60608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:29.400923   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.400955   60608 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:29.400977   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHHostname
	I0127 11:33:29.403665   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.403998   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404026   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404049   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404241   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404423   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404481   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:29.404512   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.404710   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHPort
	I0127 11:33:29.404854   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHKeyPath
	I0127 11:33:29.404902   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.405033   60608 main.go:141] libmachine: (pause-900843) Calling .GetSSHUsername
	I0127 11:33:29.405143   60608 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/pause-900843/id_rsa Username:docker}
	I0127 11:33:29.513057   60608 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:29.518966   60608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:29.677793   60608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:29.687702   60608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:29.687796   60608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:29.700107   60608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 11:33:29.700132   60608 start.go:495] detecting cgroup driver to use...
	I0127 11:33:29.700206   60608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:29.720239   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:29.736759   60608 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:29.736864   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:29.751575   60608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:29.766382   60608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:29.929606   60608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:30.068031   60608 docker.go:233] disabling docker service ...
	I0127 11:33:30.068092   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:30.090234   60608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:30.104920   60608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:30.267073   60608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:30.410099   60608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:30.426262   60608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:30.448794   60608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:33:30.448851   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.461453   60608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:30.461514   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.476685   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.488442   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.498740   60608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:30.512774   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.526619   60608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:30.539865   60608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
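	# Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf,
	# sketched as the resulting keys (taken from the commands themselves; the
	# rest of the drop-in is left untouched):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]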
	I0127 11:33:30.551635   60608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:30.562250   60608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:33:30.572647   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:30.724876   60608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:31.947292   60608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.222375748s)
	I0127 11:33:31.947338   60608 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:31.947399   60608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:31.964628   60608 start.go:563] Will wait 60s for crictl version
	I0127 11:33:31.964697   60608 ssh_runner.go:195] Run: which crictl
	I0127 11:33:31.971037   60608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:32.104049   60608 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:33:32.104148   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.356574   60608 ssh_runner.go:195] Run: crio --version
	I0127 11:33:32.594082   60608 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:33:31.293692   60315 out.go:235]   - Generating certificates and keys ...
	I0127 11:33:31.293813   60315 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:33:31.293920   60315 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:33:31.294024   60315 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:33:31.694694   60315 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:33:31.821080   60315 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:33:32.143166   60315 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:33:32.197137   60315 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:33:32.197479   60315 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.425895   60315 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:33:32.426224   60315 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-480798 localhost] and IPs [192.168.83.73 127.0.0.1 ::1]
	I0127 11:33:32.589528   60315 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:33:32.778137   60315 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:33:32.595350   60608 main.go:141] libmachine: (pause-900843) Calling .GetIP
	I0127 11:33:32.598873   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599218   60608 main.go:141] libmachine: (pause-900843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:6c:53", ip: ""} in network mk-pause-900843: {Iface:virbr2 ExpiryTime:2025-01-27 12:32:09 +0000 UTC Type:0 Mac:52:54:00:73:6c:53 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-900843 Clientid:01:52:54:00:73:6c:53}
	I0127 11:33:32.599253   60608 main.go:141] libmachine: (pause-900843) DBG | domain pause-900843 has defined IP address 192.168.50.246 and MAC address 52:54:00:73:6c:53 in network mk-pause-900843
	I0127 11:33:32.599510   60608 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
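	# host.minikube.internal is minikube's stable name for the host side of the
	# VM network; the grep above checks /etc/hosts for an entry of this form
	# (the .1 gateway of this run's 192.168.50.0/24 libvirt network):
	#   192.168.50.1	host.minikube.internal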
	I0127 11:33:32.624423   60608 kubeadm.go:883] updating cluster {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:33:32.624616   60608 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:33:32.624686   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.763015   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.763047   60608 crio.go:433] Images already preloaded, skipping extraction
	I0127 11:33:32.763107   60608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:32.879425   60608 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:33:32.879449   60608 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:33:32.879459   60608 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.32.1 crio true true} ...
	I0127 11:33:32.879577   60608 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-900843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:33:32.879673   60608 ssh_runner.go:195] Run: crio config
	I0127 11:33:33.004770   60608 cni.go:84] Creating CNI manager for ""
	I0127 11:33:33.004798   60608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:33:33.004812   60608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
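	# The "bridge" recommendation plus the pod CIDR above imply a standard CNI
	# conflist; a minimal sketch of the shape only (field values assumed, not
	# captured from this run):
	#   { "cniVersion": "1.0.0", "name": "bridge",
	#     "plugins": [ { "type": "bridge", "bridge": "bridge", "isGateway": true,
	#       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } } ] }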
	I0127 11:33:33.004845   60608 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-900843 NodeName:pause-900843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:33:33.005056   60608 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-900843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:33:33.005120   60608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:33:33.053115   60608 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:33:33.053188   60608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:33:33.063741   60608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 11:33:33.082239   60608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:33:33.155905   60608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
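	# The multi-document config dumped above is what lands in
	# /var/tmp/minikube/kubeadm.yaml.new; an offline sanity check (assumes
	# kubeadm v1.32.x on PATH; not executed in this run) would be:
	#   kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new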
	I0127 11:33:33.190014   60608 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0127 11:33:33.194134   60608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:33.428935   60608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:33:33.463354   60608 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843 for IP: 192.168.50.246
	I0127 11:33:33.463375   60608 certs.go:194] generating shared ca certs ...
	I0127 11:33:33.463394   60608 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:33:33.463564   60608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:33:33.463652   60608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:33:33.463669   60608 certs.go:256] generating profile certs ...
	I0127 11:33:33.463840   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/client.key
	I0127 11:33:33.463939   60608 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key.ff28fce8
	I0127 11:33:33.463981   60608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key
	I0127 11:33:33.464081   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:33:33.464119   60608 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:33:33.464129   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:33:33.464162   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:33:33.464195   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:33:33.464226   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:33:33.464280   60608 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:33.465040   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:33:33.500924   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:33:33.535736   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:33:33.568188   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:33:33.601451   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:33:33.626636   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:33:33.650258   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:33:33.698501   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/pause-900843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:33:33.726687   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:33:33.756031   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:33:33.781042   60608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:33:33.807550   60608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:33:33.833954   60608 ssh_runner.go:195] Run: openssl version
	I0127 11:33:33.840318   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:33:33.856131   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860824   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.860917   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:33:33.867171   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:33:33.879120   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:33:33.890622   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895292   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.895350   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:33:33.900938   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:33:33.910937   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:33:33.922304   60608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927290   60608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.927347   60608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:33:33.933682   60608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
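	# The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above follow OpenSSL's
	# subject-hash lookup scheme; a sketch of how each symlink name is derived
	# (assumes openssl is installed):
	#   h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	#   sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"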
	I0127 11:33:33.947503   60608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:33:33.953994   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:33:33.160573   60315 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:33:33.160670   60315 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:33:33.224218   60315 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:33:33.788353   60315 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:33:33.899841   60315 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:33:33.976565   60315 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:33:33.993549   60315 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:33:33.994045   60315 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:33:33.994107   60315 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:33:34.115038   60315 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:33:32.022481   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:33:32.022525   60753 machine.go:96] duration metric: took 2.3710731s to provisionDockerMachine
	I0127 11:33:32.022540   60753 start.go:293] postStartSetup for "running-upgrade-968925" (driver="kvm2")
	I0127 11:33:32.022554   60753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:33:32.022576   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.022884   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:33:32.022923   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.025998   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.026371   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.026413   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.026690   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.026910   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.027109   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.027238   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:32.115577   60753 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:33:32.120297   60753 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 11:33:32.120324   60753 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:33:32.120403   60753 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:33:32.120509   60753 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:33:32.120631   60753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:33:32.129361   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:33:32.157413   60753 start.go:296] duration metric: took 134.852355ms for postStartSetup
	I0127 11:33:32.157464   60753 fix.go:56] duration metric: took 2.7610705s for fixHost
	I0127 11:33:32.157503   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.160332   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.160757   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.160789   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.161055   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.161278   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.161437   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.161590   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.161742   60753 main.go:141] libmachine: Using SSH client type: native
	I0127 11:33:32.161938   60753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0127 11:33:32.161951   60753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:33:32.285000   60753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977612.263526312
	
	I0127 11:33:32.285025   60753 fix.go:216] guest clock: 1737977612.263526312
	I0127 11:33:32.285033   60753 fix.go:229] Guest: 2025-01-27 11:33:32.263526312 +0000 UTC Remote: 2025-01-27 11:33:32.157469968 +0000 UTC m=+6.307211810 (delta=106.056344ms)
	I0127 11:33:32.285061   60753 fix.go:200] guest clock delta is within tolerance: 106.056344ms
	I0127 11:33:32.285071   60753 start.go:83] releasing machines lock for "running-upgrade-968925", held for 2.888710729s
	I0127 11:33:32.285094   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.285357   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:32.288444   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.288934   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.288962   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.289160   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289687   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289854   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .DriverName
	I0127 11:33:32.289934   60753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:33:32.289983   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.290256   60753 ssh_runner.go:195] Run: cat /version.json
	I0127 11:33:32.290277   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHHostname
	I0127 11:33:32.293060   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293422   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.293463   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293594   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.293643   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.293787   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.293963   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.294064   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	I0127 11:33:32.294342   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:32.294373   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:32.294419   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHPort
	I0127 11:33:32.294567   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHKeyPath
	I0127 11:33:32.294709   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetSSHUsername
	I0127 11:33:32.294828   60753 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/running-upgrade-968925/id_rsa Username:docker}
	W0127 11:33:32.406391   60753 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0127 11:33:32.406481   60753 ssh_runner.go:195] Run: systemctl --version
	I0127 11:33:32.411803   60753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:33:32.607162   60753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:33:32.614146   60753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:33:32.614209   60753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:33:32.629721   60753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:33:32.629739   60753 start.go:495] detecting cgroup driver to use...
	I0127 11:33:32.629804   60753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:33:32.649185   60753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:33:32.668326   60753 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:33:32.668395   60753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:33:32.685821   60753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:33:32.698804   60753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:33:32.856379   60753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:33:33.013882   60753 docker.go:233] disabling docker service ...
	I0127 11:33:33.013959   60753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:33:33.026052   60753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:33:33.038472   60753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:33:33.181511   60753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:33:33.325886   60753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:33:33.341457   60753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:33:33.363109   60753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 11:33:33.363180   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.380863   60753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:33:33.380943   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.404474   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.413549   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.422753   60753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:33:33.435725   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.448044   60753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.464740   60753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:33:33.478143   60753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:33:33.488185   60753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:33:33.499059   60753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:33:33.741704   60753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:33:34.502655   60753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:33:34.502727   60753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:33:34.508164   60753 start.go:563] Will wait 60s for crictl version
	I0127 11:33:34.508225   60753 ssh_runner.go:195] Run: which crictl
	I0127 11:33:34.511843   60753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:33:34.550104   60753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0127 11:33:34.550183   60753 ssh_runner.go:195] Run: crio --version
	I0127 11:33:34.589506   60753 ssh_runner.go:195] Run: crio --version
	I0127 11:33:34.632821   60753 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0127 11:33:34.634185   60753 main.go:141] libmachine: (running-upgrade-968925) Calling .GetIP
	I0127 11:33:34.637149   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:34.637529   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:83:c9", ip: ""} in network mk-running-upgrade-968925: {Iface:virbr4 ExpiryTime:2025-01-27 12:32:50 +0000 UTC Type:0 Mac:52:54:00:36:83:c9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:running-upgrade-968925 Clientid:01:52:54:00:36:83:c9}
	I0127 11:33:34.637560   60753 main.go:141] libmachine: (running-upgrade-968925) DBG | domain running-upgrade-968925 has defined IP address 192.168.72.228 and MAC address 52:54:00:36:83:c9 in network mk-running-upgrade-968925
	I0127 11:33:34.637846   60753 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 11:33:34.641492   60753 kubeadm.go:883] updating cluster {Name:running-upgrade-968925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-968925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0127 11:33:34.641616   60753 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0127 11:33:34.641669   60753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:33:34.674593   60753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0127 11:33:34.674649   60753 ssh_runner.go:195] Run: which lz4
	I0127 11:33:34.683097   60753 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:33:34.687363   60753 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:33:34.687394   60753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
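	# The preload is an lz4-compressed tarball of container images and k8s state;
	# a minimal extraction sketch (flags assumed here, not the exact command
	# minikube runs next):
	#   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4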
	I0127 11:33:34.192855   60315 out.go:235]   - Booting up control plane ...
	I0127 11:33:34.193031   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:33:34.193145   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:33:34.193251   60315 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:33:34.193399   60315 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:33:34.193617   60315 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:33:33.960197   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:33:33.967566   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:33:33.974130   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:33:33.980629   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:33:33.986481   60608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
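	# openssl's -checkend 86400 exits 0 only if the certificate is still valid
	# 86400s (24h) from now, which is consistent with the certs.go "skipping
	# valid" lines earlier. A standalone sketch:
	#   openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
	#     -checkend 86400 && echo "valid >24h" || echo "expiring within 24h"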
	I0127 11:33:33.992283   60608 kubeadm.go:392] StartCluster: {Name:pause-900843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-900843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:33:33.992430   60608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:33:33.992507   60608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:33:34.077457   60608 cri.go:89] found id: "19636b08b7622d750f5054d7b2d51fec4669f6c31448987e876b47e95d1eb0fb"
	I0127 11:33:34.077483   60608 cri.go:89] found id: "745d2dc203d2a6a3963920aa88049cf50d6ad6cb186e9782f5e2afe41c3ff84b"
	I0127 11:33:34.077490   60608 cri.go:89] found id: "2ffcce7727f3745431b7444cf89fc00c0ee7497937665bc92d12a40377390157"
	I0127 11:33:34.077495   60608 cri.go:89] found id: "347e50c706723add6c69c1bfeb19290636137a1e8765b41976cda1b16ed4076b"
	I0127 11:33:34.077500   60608 cri.go:89] found id: "13eaf4245fb539e865ee03fefd604b4b88fe4ff8af14b5c168acea7eb3f401be"
	I0127 11:33:34.077505   60608 cri.go:89] found id: "20e60f86899ccc2f414ff0642e113f31da0728a7b8375834767fbecc9be0c358"
	I0127 11:33:34.077509   60608 cri.go:89] found id: "1383f1d93fdba8af8ad1360ce250b50c269ebbf4b6c6fa1895494ae9968dadcb"
	I0127 11:33:34.077513   60608 cri.go:89] found id: "f72a1bdee26af527c97f25b5afd7ef636cba09b54fb369ae2a88f66006e1eb76"
	I0127 11:33:34.077517   60608 cri.go:89] found id: "0574c0c89c037a6f4a9e6f77dd5a5fb3dbb4526bb496e0e10a98db0cabdc5aae"
	I0127 11:33:34.077526   60608 cri.go:89] found id: "506354c5ff5e7ac4c31a161e4a512782957781f5a355d36b8b16aa8011149b3b"
	I0127 11:33:34.077531   60608 cri.go:89] found id: "1064451fc9f3850cee5e45dbbd6baea628acfab95608e700033ce004d3377c44"
	I0127 11:33:34.077534   60608 cri.go:89] found id: "5cb36e64d3ac093b2b4031fdd0eeedbf3b409bea7fc791055f42729930ad4409"
	I0127 11:33:34.077539   60608 cri.go:89] found id: ""
	I0127 11:33:34.077585   60608 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-900843 -n pause-900843
helpers_test.go:261: (dbg) Run:  kubectl --context pause-900843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (291.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m51.364357219s)

                                                
                                                
-- stdout --
	* [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:37:40.635455   66618 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:37:40.635544   66618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:37:40.635548   66618 out.go:358] Setting ErrFile to fd 2...
	I0127 11:37:40.635551   66618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:37:40.635749   66618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:37:40.636352   66618 out.go:352] Setting JSON to false
	I0127 11:37:40.637261   66618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8361,"bootTime":1737969500,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:37:40.637349   66618 start.go:139] virtualization: kvm guest
	I0127 11:37:40.639724   66618 out.go:177] * [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:37:40.641216   66618 notify.go:220] Checking for updates...
	I0127 11:37:40.641297   66618 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:37:40.642681   66618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:37:40.643964   66618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:37:40.645144   66618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:37:40.646343   66618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:37:40.647441   66618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:37:40.649004   66618 config.go:182] Loaded profile config "NoKubernetes-200407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 11:37:40.649126   66618 config.go:182] Loaded profile config "cert-expiration-091274": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:37:40.649240   66618 config.go:182] Loaded profile config "kubernetes-upgrade-480798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:37:40.649380   66618 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:37:40.684093   66618 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:37:40.685303   66618 start.go:297] selected driver: kvm2
	I0127 11:37:40.685324   66618 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:37:40.685357   66618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:37:40.686090   66618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:37:40.686168   66618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:37:40.704642   66618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:37:40.704697   66618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:37:40.704925   66618 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:37:40.704952   66618 cni.go:84] Creating CNI manager for ""
	I0127 11:37:40.704991   66618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:37:40.705003   66618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 11:37:40.705047   66618 start.go:340] cluster config:
	{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:37:40.705151   66618 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:37:40.707748   66618 out.go:177] * Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	I0127 11:37:40.708943   66618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:37:40.708974   66618 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:37:40.708980   66618 cache.go:56] Caching tarball of preloaded images
	I0127 11:37:40.709075   66618 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:37:40.709088   66618 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 11:37:40.709166   66618 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:37:40.709182   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json: {Name:mk23d3a9ca4b9d360303435eee7748f8ce432235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:37:40.709330   66618 start.go:360] acquireMachinesLock for old-k8s-version-570778: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:38:02.980412   66618 start.go:364] duration metric: took 22.271039544s to acquireMachinesLock for "old-k8s-version-570778"
	I0127 11:38:02.980476   66618 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:38:02.980606   66618 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 11:38:02.982428   66618 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:38:02.982681   66618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:38:02.982718   66618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:38:02.999176   66618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0127 11:38:02.999599   66618 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:38:03.000181   66618 main.go:141] libmachine: Using API Version  1
	I0127 11:38:03.000205   66618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:38:03.000545   66618 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:38:03.000738   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:38:03.000901   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:03.001090   66618 start.go:159] libmachine.API.Create for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:38:03.001134   66618 client.go:168] LocalClient.Create starting
	I0127 11:38:03.001172   66618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem
	I0127 11:38:03.001211   66618 main.go:141] libmachine: Decoding PEM data...
	I0127 11:38:03.001231   66618 main.go:141] libmachine: Parsing certificate...
	I0127 11:38:03.001298   66618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem
	I0127 11:38:03.001329   66618 main.go:141] libmachine: Decoding PEM data...
	I0127 11:38:03.001345   66618 main.go:141] libmachine: Parsing certificate...
	I0127 11:38:03.001370   66618 main.go:141] libmachine: Running pre-create checks...
	I0127 11:38:03.001381   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .PreCreateCheck
	I0127 11:38:03.001776   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:38:03.002138   66618 main.go:141] libmachine: Creating machine...
	I0127 11:38:03.002153   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .Create
	I0127 11:38:03.002294   66618 main.go:141] libmachine: (old-k8s-version-570778) creating KVM machine...
	I0127 11:38:03.002309   66618 main.go:141] libmachine: (old-k8s-version-570778) creating network...
	I0127 11:38:03.003411   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found existing default KVM network
	I0127 11:38:03.004429   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.004290   66914 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:41:6f:38} reservation:<nil>}
	I0127 11:38:03.005625   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.005519   66914 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c5a0}
	I0127 11:38:03.005654   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | created network xml: 
	I0127 11:38:03.005665   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | <network>
	I0127 11:38:03.005678   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   <name>mk-old-k8s-version-570778</name>
	I0127 11:38:03.005689   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   <dns enable='no'/>
	I0127 11:38:03.005699   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   
	I0127 11:38:03.005711   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 11:38:03.005726   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |     <dhcp>
	I0127 11:38:03.005744   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 11:38:03.005754   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |     </dhcp>
	I0127 11:38:03.005761   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   </ip>
	I0127 11:38:03.005768   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG |   
	I0127 11:38:03.005775   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | </network>
	I0127 11:38:03.005784   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | 
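
The driver scanned for a free private /24 (192.168.39.0/24 was taken, 192.168.50.0/24 was free) and defined the network above through the libvirt API. A hypothetical command-line equivalent, feeding the same XML to virsh net-define and net-start; a sketch, not what the kvm2 driver actually calls:

package main

import (
	"log"
	"os"
	"os/exec"
)

// The same network XML the driver logs above, reproduced verbatim.
const networkXML = `<network>
  <name>mk-old-k8s-version-570778</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()
	// net-define registers the network; net-start brings up the bridge and dnsmasq.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-old-k8s-version-570778"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}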
	I0127 11:38:03.011132   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | trying to create private KVM network mk-old-k8s-version-570778 192.168.50.0/24...
	I0127 11:38:03.080809   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | private KVM network mk-old-k8s-version-570778 192.168.50.0/24 created
	I0127 11:38:03.080852   66618 main.go:141] libmachine: (old-k8s-version-570778) setting up store path in /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778 ...
	I0127 11:38:03.080873   66618 main.go:141] libmachine: (old-k8s-version-570778) building disk image from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 11:38:03.080937   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.080813   66914 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:38:03.081016   66618 main.go:141] libmachine: (old-k8s-version-570778) Downloading /home/jenkins/minikube-integration/20319-18835/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:38:03.329583   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.329433   66914 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa...
	I0127 11:38:03.601085   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.600958   66914 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/old-k8s-version-570778.rawdisk...
	I0127 11:38:03.601120   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | Writing magic tar header
	I0127 11:38:03.601134   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | Writing SSH key tar header
	I0127 11:38:03.601146   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:03.601080   66914 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778 ...
	I0127 11:38:03.601276   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778
	I0127 11:38:03.601313   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778 (perms=drwx------)
	I0127 11:38:03.601333   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines
	I0127 11:38:03.601348   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:38:03.601364   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835
	I0127 11:38:03.601379   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 11:38:03.601397   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines (perms=drwxr-xr-x)
	I0127 11:38:03.601419   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home/jenkins
	I0127 11:38:03.601433   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | checking permissions on dir: /home
	I0127 11:38:03.601443   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | skipping /home - not owner
	I0127 11:38:03.601464   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube (perms=drwxr-xr-x)
	I0127 11:38:03.601480   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins/minikube-integration/20319-18835 (perms=drwxrwxr-x)
	I0127 11:38:03.601493   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 11:38:03.601504   66618 main.go:141] libmachine: (old-k8s-version-570778) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 11:38:03.601517   66618 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:38:03.602592   66618 main.go:141] libmachine: (old-k8s-version-570778) define libvirt domain using xml: 
	I0127 11:38:03.602615   66618 main.go:141] libmachine: (old-k8s-version-570778) <domain type='kvm'>
	I0127 11:38:03.602626   66618 main.go:141] libmachine: (old-k8s-version-570778)   <name>old-k8s-version-570778</name>
	I0127 11:38:03.602633   66618 main.go:141] libmachine: (old-k8s-version-570778)   <memory unit='MiB'>2200</memory>
	I0127 11:38:03.602642   66618 main.go:141] libmachine: (old-k8s-version-570778)   <vcpu>2</vcpu>
	I0127 11:38:03.602655   66618 main.go:141] libmachine: (old-k8s-version-570778)   <features>
	I0127 11:38:03.602663   66618 main.go:141] libmachine: (old-k8s-version-570778)     <acpi/>
	I0127 11:38:03.602675   66618 main.go:141] libmachine: (old-k8s-version-570778)     <apic/>
	I0127 11:38:03.602681   66618 main.go:141] libmachine: (old-k8s-version-570778)     <pae/>
	I0127 11:38:03.602689   66618 main.go:141] libmachine: (old-k8s-version-570778)     
	I0127 11:38:03.602695   66618 main.go:141] libmachine: (old-k8s-version-570778)   </features>
	I0127 11:38:03.602701   66618 main.go:141] libmachine: (old-k8s-version-570778)   <cpu mode='host-passthrough'>
	I0127 11:38:03.602706   66618 main.go:141] libmachine: (old-k8s-version-570778)   
	I0127 11:38:03.602715   66618 main.go:141] libmachine: (old-k8s-version-570778)   </cpu>
	I0127 11:38:03.602720   66618 main.go:141] libmachine: (old-k8s-version-570778)   <os>
	I0127 11:38:03.602726   66618 main.go:141] libmachine: (old-k8s-version-570778)     <type>hvm</type>
	I0127 11:38:03.602734   66618 main.go:141] libmachine: (old-k8s-version-570778)     <boot dev='cdrom'/>
	I0127 11:38:03.602741   66618 main.go:141] libmachine: (old-k8s-version-570778)     <boot dev='hd'/>
	I0127 11:38:03.602751   66618 main.go:141] libmachine: (old-k8s-version-570778)     <bootmenu enable='no'/>
	I0127 11:38:03.602762   66618 main.go:141] libmachine: (old-k8s-version-570778)   </os>
	I0127 11:38:03.602770   66618 main.go:141] libmachine: (old-k8s-version-570778)   <devices>
	I0127 11:38:03.602779   66618 main.go:141] libmachine: (old-k8s-version-570778)     <disk type='file' device='cdrom'>
	I0127 11:38:03.602794   66618 main.go:141] libmachine: (old-k8s-version-570778)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/boot2docker.iso'/>
	I0127 11:38:03.602805   66618 main.go:141] libmachine: (old-k8s-version-570778)       <target dev='hdc' bus='scsi'/>
	I0127 11:38:03.602839   66618 main.go:141] libmachine: (old-k8s-version-570778)       <readonly/>
	I0127 11:38:03.602869   66618 main.go:141] libmachine: (old-k8s-version-570778)     </disk>
	I0127 11:38:03.602881   66618 main.go:141] libmachine: (old-k8s-version-570778)     <disk type='file' device='disk'>
	I0127 11:38:03.602905   66618 main.go:141] libmachine: (old-k8s-version-570778)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 11:38:03.602946   66618 main.go:141] libmachine: (old-k8s-version-570778)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/old-k8s-version-570778.rawdisk'/>
	I0127 11:38:03.602972   66618 main.go:141] libmachine: (old-k8s-version-570778)       <target dev='hda' bus='virtio'/>
	I0127 11:38:03.602987   66618 main.go:141] libmachine: (old-k8s-version-570778)     </disk>
	I0127 11:38:03.602997   66618 main.go:141] libmachine: (old-k8s-version-570778)     <interface type='network'>
	I0127 11:38:03.603012   66618 main.go:141] libmachine: (old-k8s-version-570778)       <source network='mk-old-k8s-version-570778'/>
	I0127 11:38:03.603023   66618 main.go:141] libmachine: (old-k8s-version-570778)       <model type='virtio'/>
	I0127 11:38:03.603036   66618 main.go:141] libmachine: (old-k8s-version-570778)     </interface>
	I0127 11:38:03.603048   66618 main.go:141] libmachine: (old-k8s-version-570778)     <interface type='network'>
	I0127 11:38:03.603059   66618 main.go:141] libmachine: (old-k8s-version-570778)       <source network='default'/>
	I0127 11:38:03.603072   66618 main.go:141] libmachine: (old-k8s-version-570778)       <model type='virtio'/>
	I0127 11:38:03.603082   66618 main.go:141] libmachine: (old-k8s-version-570778)     </interface>
	I0127 11:38:03.603092   66618 main.go:141] libmachine: (old-k8s-version-570778)     <serial type='pty'>
	I0127 11:38:03.603103   66618 main.go:141] libmachine: (old-k8s-version-570778)       <target port='0'/>
	I0127 11:38:03.603113   66618 main.go:141] libmachine: (old-k8s-version-570778)     </serial>
	I0127 11:38:03.603123   66618 main.go:141] libmachine: (old-k8s-version-570778)     <console type='pty'>
	I0127 11:38:03.603134   66618 main.go:141] libmachine: (old-k8s-version-570778)       <target type='serial' port='0'/>
	I0127 11:38:03.603150   66618 main.go:141] libmachine: (old-k8s-version-570778)     </console>
	I0127 11:38:03.603166   66618 main.go:141] libmachine: (old-k8s-version-570778)     <rng model='virtio'>
	I0127 11:38:03.603180   66618 main.go:141] libmachine: (old-k8s-version-570778)       <backend model='random'>/dev/random</backend>
	I0127 11:38:03.603192   66618 main.go:141] libmachine: (old-k8s-version-570778)     </rng>
	I0127 11:38:03.603203   66618 main.go:141] libmachine: (old-k8s-version-570778)     
	I0127 11:38:03.603211   66618 main.go:141] libmachine: (old-k8s-version-570778)     
	I0127 11:38:03.603221   66618 main.go:141] libmachine: (old-k8s-version-570778)   </devices>
	I0127 11:38:03.603235   66618 main.go:141] libmachine: (old-k8s-version-570778) </domain>
	I0127 11:38:03.603246   66618 main.go:141] libmachine: (old-k8s-version-570778) 
	I0127 11:38:03.607776   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:3b:8c:b3 in network default
	I0127 11:38:03.608517   66618 main.go:141] libmachine: (old-k8s-version-570778) starting domain...
	I0127 11:38:03.608542   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:03.608551   66618 main.go:141] libmachine: (old-k8s-version-570778) ensuring networks are active...
	I0127 11:38:03.609250   66618 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network default is active
	I0127 11:38:03.609535   66618 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network mk-old-k8s-version-570778 is active
	I0127 11:38:03.609983   66618 main.go:141] libmachine: (old-k8s-version-570778) getting domain XML...
	I0127 11:38:03.610622   66618 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:38:04.942059   66618 main.go:141] libmachine: (old-k8s-version-570778) waiting for IP...
	I0127 11:38:04.943203   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:04.943751   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:04.943775   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:04.943658   66914 retry.go:31] will retry after 282.561949ms: waiting for domain to come up
	I0127 11:38:05.228336   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:05.228870   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:05.228894   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:05.228849   66914 retry.go:31] will retry after 259.528148ms: waiting for domain to come up
	I0127 11:38:05.490594   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:05.491229   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:05.491268   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:05.491224   66914 retry.go:31] will retry after 466.586665ms: waiting for domain to come up
	I0127 11:38:05.959916   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:05.960731   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:05.960961   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:05.960859   66914 retry.go:31] will retry after 586.984746ms: waiting for domain to come up
	I0127 11:38:06.549842   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:06.550346   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:06.550371   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:06.550314   66914 retry.go:31] will retry after 758.255338ms: waiting for domain to come up
	I0127 11:38:07.310054   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:07.310708   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:07.310756   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:07.310684   66914 retry.go:31] will retry after 576.302639ms: waiting for domain to come up
	I0127 11:38:07.888465   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:07.889043   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:07.889073   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:07.889014   66914 retry.go:31] will retry after 1.184547089s: waiting for domain to come up
	I0127 11:38:09.075439   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:09.075976   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:09.076014   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:09.075960   66914 retry.go:31] will retry after 1.315512805s: waiting for domain to come up
	I0127 11:38:10.393279   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:10.393790   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:10.393821   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:10.393748   66914 retry.go:31] will retry after 1.738470256s: waiting for domain to come up
	I0127 11:38:12.134551   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:12.134964   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:12.135022   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:12.134955   66914 retry.go:31] will retry after 1.452756423s: waiting for domain to come up
	I0127 11:38:13.590074   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:13.590626   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:13.590655   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:13.590588   66914 retry.go:31] will retry after 2.629020333s: waiting for domain to come up
	I0127 11:38:16.220958   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:16.221479   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:16.221511   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:16.221447   66914 retry.go:31] will retry after 3.017388038s: waiting for domain to come up
	I0127 11:38:19.240692   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:19.241135   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:19.241200   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:19.241137   66914 retry.go:31] will retry after 2.998081825s: waiting for domain to come up
	I0127 11:38:22.243322   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:22.243822   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:38:22.243841   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:38:22.243796   66914 retry.go:31] will retry after 4.355751083s: waiting for domain to come up
	I0127 11:38:26.603443   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.604085   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has current primary IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.604118   66618 main.go:141] libmachine: (old-k8s-version-570778) found domain IP: 192.168.50.193
	I0127 11:38:26.604132   66618 main.go:141] libmachine: (old-k8s-version-570778) reserving static IP address...
	I0127 11:38:26.604609   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"} in network mk-old-k8s-version-570778
	I0127 11:38:26.677340   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | Getting to WaitForSSH function...
	I0127 11:38:26.677380   66618 main.go:141] libmachine: (old-k8s-version-570778) reserved static IP address 192.168.50.193 for domain old-k8s-version-570778
	I0127 11:38:26.677394   66618 main.go:141] libmachine: (old-k8s-version-570778) waiting for SSH...
	I0127 11:38:26.680536   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.680977   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:26.681015   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.681089   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH client type: external
	I0127 11:38:26.681133   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa (-rw-------)
	I0127 11:38:26.681167   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:38:26.681182   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | About to run SSH command:
	I0127 11:38:26.681197   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | exit 0
	I0127 11:38:26.807399   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | SSH cmd err, output: <nil>: 
	I0127 11:38:26.807633   66618 main.go:141] libmachine: (old-k8s-version-570778) KVM machine creation complete
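
The retry.go lines above show the shape of machine creation: poll libvirt for a DHCP lease with a growing, jittered delay until the domain reports an IP, then probe SSH with exit 0. A minimal sketch of that wait loop, where the probe function is a stand-in for the driver's lease lookup:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls probe with a jittered, growing delay until it reports an
// address or maxWait elapses, mirroring the "will retry after ..." log lines.
func waitForIP(probe func() (string, bool), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := probe(); ok {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow the base delay, as in the log's lengthening retries
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	// Stand-in probe: pretends the guest's DHCP lease appears on the sixth poll.
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts > 5 {
			return "192.168.50.193", true
		}
		return "", false
	}, time.Minute)
	fmt.Println(ip, err)
}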
	I0127 11:38:26.807968   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:38:26.808535   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:26.808735   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:26.808856   66618 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 11:38:26.808871   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetState
	I0127 11:38:26.810162   66618 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 11:38:26.810176   66618 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 11:38:26.810181   66618 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 11:38:26.810186   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:26.812564   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.812953   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:26.812982   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.813130   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:26.813285   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:26.813445   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:26.813593   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:26.813719   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:26.813902   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:26.813915   66618 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 11:38:26.918805   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
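
After the external ssh binary confirms the first connection, provisioning switches to the "native" Go SSH client shown in the dial struct above. A hedged sketch of that path using golang.org/x/crypto/ssh; the helper name and structure are assumptions, not libmachine's code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials with the machine's private key and runs one command,
// roughly what the "Using SSH client type: native" lines correspond to.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.50.193:22", "docker",
		"/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa",
		"exit 0")
	fmt.Println(out, err)
}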
	I0127 11:38:26.918826   66618 main.go:141] libmachine: Detecting the provisioner...
	I0127 11:38:26.918833   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:26.921474   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.921902   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:26.921922   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:26.922054   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:26.922209   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:26.922384   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:26.922495   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:26.922654   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:26.922819   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:26.922832   66618 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 11:38:27.028364   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 11:38:27.028471   66618 main.go:141] libmachine: found compatible host: buildroot
	I0127 11:38:27.028487   66618 main.go:141] libmachine: Provisioning with buildroot...
	I0127 11:38:27.028503   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:38:27.028781   66618 buildroot.go:166] provisioning hostname "old-k8s-version-570778"
	I0127 11:38:27.028806   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:38:27.029021   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.031835   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.032210   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.032238   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.032454   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.032626   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.032771   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.032942   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.033139   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:27.033338   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:27.033351   66618 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570778 && echo "old-k8s-version-570778" | sudo tee /etc/hostname
	I0127 11:38:27.152837   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570778
	
	I0127 11:38:27.152871   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.155982   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.156336   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.156367   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.156545   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.156758   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.156944   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.157106   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.157287   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:27.157454   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:27.157469   66618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570778/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:38:27.271979   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:38:27.272002   66618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:38:27.272037   66618 buildroot.go:174] setting up certificates
	I0127 11:38:27.272051   66618 provision.go:84] configureAuth start
	I0127 11:38:27.272064   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:38:27.272369   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:38:27.275183   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.275565   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.275593   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.275785   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.278185   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.278619   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.278648   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.278856   66618 provision.go:143] copyHostCerts
	I0127 11:38:27.278935   66618 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:38:27.278958   66618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:38:27.279030   66618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:38:27.279149   66618 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:38:27.279163   66618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:38:27.279196   66618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:38:27.279280   66618 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:38:27.279290   66618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:38:27.279323   66618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:38:27.279421   66618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570778 san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
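
The server certificate is generated locally and signed by the minikube CA, with the SANs listed in the provision.go line above: loopback, the VM IP, and the host names. A self-contained sketch of issuing a certificate with those SANs using crypto/x509; self-signed here for brevity, where minikube signs with ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-570778"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-570778"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.193")},
	}
	// Self-signed for the sketch; minikube passes its CA certificate and key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}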
	I0127 11:38:27.361552   66618 provision.go:177] copyRemoteCerts
	I0127 11:38:27.361607   66618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:38:27.361629   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.364404   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.364748   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.364800   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.364937   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.365125   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.365280   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.365427   66618 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:38:27.445466   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:38:27.468244   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:38:27.490398   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:38:27.512476   66618 provision.go:87] duration metric: took 240.407655ms to configureAuth
	I0127 11:38:27.512505   66618 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:38:27.512653   66618 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:38:27.512716   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.515477   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.515865   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.515894   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.516157   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.516456   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.516642   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.516806   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.516988   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:27.517173   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:27.517194   66618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:38:27.748583   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:38:27.748611   66618 main.go:141] libmachine: Checking connection to Docker...
	I0127 11:38:27.748621   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetURL
	I0127 11:38:27.749953   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | using libvirt version 6000000
	I0127 11:38:27.752412   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.752826   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.752871   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.753059   66618 main.go:141] libmachine: Docker is up and running!
	I0127 11:38:27.753070   66618 main.go:141] libmachine: Reticulating splines...
	I0127 11:38:27.753076   66618 client.go:171] duration metric: took 24.751932809s to LocalClient.Create
	I0127 11:38:27.753100   66618 start.go:167] duration metric: took 24.75201255s to libmachine.API.Create "old-k8s-version-570778"
	I0127 11:38:27.753115   66618 start.go:293] postStartSetup for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:38:27.753126   66618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:38:27.753147   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:27.753369   66618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:38:27.753398   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.755625   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.756001   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.756035   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.756141   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.756352   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.756545   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.756687   66618 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:38:27.837543   66618 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:38:27.841456   66618 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:38:27.841477   66618 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:38:27.841537   66618 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:38:27.841606   66618 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:38:27.841688   66618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:38:27.850955   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:38:27.873059   66618 start.go:296] duration metric: took 119.931308ms for postStartSetup
	I0127 11:38:27.873098   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:38:27.873702   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:38:27.876375   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.876769   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.876800   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.877047   66618 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:38:27.877222   66618 start.go:128] duration metric: took 24.896595223s to createHost
	I0127 11:38:27.877242   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.879776   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.880132   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.880166   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.880298   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.880476   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.880653   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.880811   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.880962   66618 main.go:141] libmachine: Using SSH client type: native
	I0127 11:38:27.881120   66618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:38:27.881135   66618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:38:27.987894   66618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737977907.961458772
	
	I0127 11:38:27.987919   66618 fix.go:216] guest clock: 1737977907.961458772
	I0127 11:38:27.987926   66618 fix.go:229] Guest: 2025-01-27 11:38:27.961458772 +0000 UTC Remote: 2025-01-27 11:38:27.877231928 +0000 UTC m=+47.279373079 (delta=84.226844ms)
	I0127 11:38:27.987945   66618 fix.go:200] guest clock delta is within tolerance: 84.226844ms
	I0127 11:38:27.987950   66618 start.go:83] releasing machines lock for "old-k8s-version-570778", held for 25.007510015s
	I0127 11:38:27.987975   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:27.988259   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:38:27.991077   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.991641   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.991671   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.991896   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:27.992507   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:27.992700   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:38:27.992793   66618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:38:27.992848   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.992898   66618 ssh_runner.go:195] Run: cat /version.json
	I0127 11:38:27.992922   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:38:27.995816   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.996162   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.996192   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.996232   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.996364   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.996538   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.996677   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.996714   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:27.996740   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:27.996797   66618 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:38:27.996987   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:38:27.997157   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:38:27.997301   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:38:27.997418   66618 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:38:28.081376   66618 ssh_runner.go:195] Run: systemctl --version
	I0127 11:38:28.106825   66618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:38:28.273866   66618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:38:28.279893   66618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:38:28.279958   66618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:38:28.297893   66618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:38:28.297919   66618 start.go:495] detecting cgroup driver to use...
	I0127 11:38:28.297975   66618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:38:28.315908   66618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:38:28.332033   66618 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:38:28.332102   66618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:38:28.348215   66618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:38:28.363088   66618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:38:28.503395   66618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:38:28.666095   66618 docker.go:233] disabling docker service ...
	I0127 11:38:28.666172   66618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:38:28.682182   66618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:38:28.694810   66618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:38:28.832671   66618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:38:28.947733   66618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:38:28.961783   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:38:28.980946   66618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:38:28.980999   66618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:38:28.991022   66618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:38:28.991081   66618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:38:29.001179   66618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:38:29.014048   66618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:38:29.024472   66618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:38:29.037585   66618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:38:29.046718   66618 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:38:29.046780   66618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:38:29.059631   66618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:38:29.071726   66618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:38:29.197344   66618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:38:29.296950   66618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:38:29.297013   66618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:38:29.301813   66618 start.go:563] Will wait 60s for crictl version
	I0127 11:38:29.301866   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:29.305664   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:38:29.341331   66618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:38:29.341420   66618 ssh_runner.go:195] Run: crio --version
	I0127 11:38:29.368520   66618 ssh_runner.go:195] Run: crio --version
	I0127 11:38:29.398777   66618 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:38:29.400187   66618 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:38:29.403363   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:29.403915   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:38:17 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:38:29.403950   66618 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:38:29.404137   66618 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:38:29.408206   66618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:38:29.422858   66618 kubeadm.go:883] updating cluster {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:38:29.422979   66618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:38:29.423046   66618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:38:29.462144   66618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:38:29.462217   66618 ssh_runner.go:195] Run: which lz4
	I0127 11:38:29.466223   66618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:38:29.470467   66618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:38:29.470491   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:38:30.909989   66618 crio.go:462] duration metric: took 1.443786337s to copy over tarball
	I0127 11:38:30.910082   66618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:38:33.520394   66618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.61027457s)
	I0127 11:38:33.520427   66618 crio.go:469] duration metric: took 2.61040072s to extract the tarball
	I0127 11:38:33.520437   66618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:38:33.563195   66618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:38:33.606972   66618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:38:33.606999   66618 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:38:33.607069   66618 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:33.607126   66618 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:38:33.607135   66618 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:33.607079   66618 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:38:33.607088   66618 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:33.607119   66618 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:33.607102   66618 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:33.607532   66618 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:38:33.608926   66618 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:38:33.608999   66618 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:33.609023   66618 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:33.609054   66618 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:38:33.609086   66618 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:33.608926   66618 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:38:33.608930   66618 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:33.609102   66618 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:33.751683   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:33.754366   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:33.758667   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:33.759696   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:38:33.760972   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:33.764606   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:33.799803   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:38:33.886339   66618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:38:33.886397   66618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:33.886446   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.900852   66618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:38:33.900899   66618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:33.900945   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.912762   66618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:38:33.912808   66618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:33.912853   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.915439   66618 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:38:33.915475   66618 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:38:33.915514   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.915530   66618 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:38:33.915562   66618 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:33.915574   66618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:38:33.915599   66618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:33.915623   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.915655   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:33.938580   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:33.938620   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:33.938650   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:33.938662   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:33.938675   66618 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:38:33.938689   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:38:33.938693   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:33.938712   66618 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:38:33.938743   66618 ssh_runner.go:195] Run: which crictl
	I0127 11:38:34.061532   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:34.069502   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:34.069599   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:38:34.069689   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:34.069772   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:38:34.069885   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:34.089088   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:34.169639   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:38:34.206483   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:38:34.206543   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:38:34.229629   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:38:34.229698   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:38:34.229655   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:38:34.268119   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:38:34.298928   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:38:34.326869   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:38:34.326952   66618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:38:34.357964   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:38:34.364726   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:38:34.364747   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:38:34.384193   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:38:34.392144   66618 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:38:34.634933   66618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:38:34.781918   66618 cache_images.go:92] duration metric: took 1.174898632s to LoadCachedImages
	W0127 11:38:34.782021   66618 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0127 11:38:34.782036   66618 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 11:38:34.782171   66618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570778 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:38:34.782256   66618 ssh_runner.go:195] Run: crio config
	I0127 11:38:34.827713   66618 cni.go:84] Creating CNI manager for ""
	I0127 11:38:34.827741   66618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:38:34.827753   66618 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:38:34.827779   66618 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570778 NodeName:old-k8s-version-570778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:38:34.827946   66618 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570778"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:38:34.828025   66618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:38:34.844382   66618 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:38:34.844454   66618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:38:34.853728   66618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 11:38:34.871092   66618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:38:34.888038   66618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:38:34.903849   66618 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 11:38:34.908402   66618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:38:34.920447   66618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:38:35.039786   66618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:38:35.058051   66618 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778 for IP: 192.168.50.193
	I0127 11:38:35.058079   66618 certs.go:194] generating shared ca certs ...
	I0127 11:38:35.058101   66618 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.058276   66618 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:38:35.058391   66618 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:38:35.058408   66618 certs.go:256] generating profile certs ...
	I0127 11:38:35.058476   66618 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key
	I0127 11:38:35.058500   66618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.crt with IP's: []
	I0127 11:38:35.199751   66618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.crt ...
	I0127 11:38:35.199789   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.crt: {Name:mk582486663219c6dfe142f6373fc6f9f80df2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.199987   66618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key ...
	I0127 11:38:35.200007   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key: {Name:mk557c3c6758c0b5a5909995dce5456716801fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.200088   66618 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f
	I0127 11:38:35.200104   66618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt.1541225f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.193]
	I0127 11:38:35.393500   66618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt.1541225f ...
	I0127 11:38:35.393531   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt.1541225f: {Name:mk511c7c79f894854fad379c0ec4a9f118a9809b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.393722   66618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f ...
	I0127 11:38:35.393746   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f: {Name:mkc4fd91ba23f77ea1995240c452c3646cf53b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.393865   66618 certs.go:381] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt.1541225f -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt
	I0127 11:38:35.393952   66618 certs.go:385] copying /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f -> /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key
	I0127 11:38:35.394002   66618 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key
	I0127 11:38:35.394016   66618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt with IP's: []
	I0127 11:38:35.511380   66618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt ...
	I0127 11:38:35.511424   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt: {Name:mkb4534ab689c2a3bc608cd3ab7fcc969705f547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.511675   66618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key ...
	I0127 11:38:35.511696   66618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key: {Name:mk31ffc61f00e1fd4e2c63eda0c649679c40002c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:38:35.511949   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:38:35.512003   66618 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:38:35.512015   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:38:35.512049   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:38:35.512084   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:38:35.512116   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:38:35.512170   66618 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:38:35.512849   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:38:35.539212   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:38:35.563506   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:38:35.588853   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:38:35.612888   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:38:35.636882   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:38:35.659527   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:38:35.683639   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:38:35.708931   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:38:35.732514   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:38:35.758244   66618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:38:35.780662   66618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:38:35.796643   66618 ssh_runner.go:195] Run: openssl version
	I0127 11:38:35.802552   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:38:35.813513   66618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:38:35.818379   66618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:38:35.818447   66618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:38:35.823987   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:38:35.835489   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:38:35.846279   66618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:38:35.850773   66618 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:38:35.850817   66618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:38:35.856458   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:38:35.869792   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:38:35.896310   66618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:38:35.901320   66618 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:38:35.901421   66618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:38:35.909897   66618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:38:35.925504   66618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:38:35.933005   66618 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:38:35.933061   66618 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:38:35.933124   66618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:38:35.933168   66618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:38:35.980100   66618 cri.go:89] found id: ""
	I0127 11:38:35.980176   66618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:38:35.990316   66618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:38:35.999369   66618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:38:36.009325   66618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:38:36.009346   66618 kubeadm.go:157] found existing configuration files:
	
	I0127 11:38:36.009395   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:38:36.018364   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:38:36.018437   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:38:36.027497   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:38:36.036100   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:38:36.036159   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:38:36.045082   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:38:36.053624   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:38:36.053666   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:38:36.062403   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:38:36.070641   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:38:36.070694   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
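The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise. A minimal shell sketch of the same check, assuming it is run inside the VM (e.g. via `minikube ssh`), with the endpoint and file list taken from the log:

	# remove any kubeconfig that does not reference the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done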
	I0127 11:38:36.079388   66618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:38:36.340239   66618 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:40:34.126951   66618 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:40:34.127038   66618 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:40:34.128792   66618 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:40:34.128867   66618 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:40:34.128956   66618 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:40:34.129069   66618 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:40:34.129193   66618 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:40:34.129267   66618 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:40:34.130937   66618 out.go:235]   - Generating certificates and keys ...
	I0127 11:40:34.131030   66618 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:40:34.131113   66618 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:40:34.131228   66618 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:40:34.131326   66618 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:40:34.131413   66618 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:40:34.131485   66618 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:40:34.131562   66618 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:40:34.131744   66618 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-570778] and IPs [192.168.50.193 127.0.0.1 ::1]
	I0127 11:40:34.131821   66618 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:40:34.131978   66618 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-570778] and IPs [192.168.50.193 127.0.0.1 ::1]
	I0127 11:40:34.132050   66618 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:40:34.132132   66618 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:40:34.132223   66618 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:40:34.132313   66618 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:40:34.132387   66618 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:40:34.132467   66618 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:40:34.132575   66618 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:40:34.132679   66618 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:40:34.132797   66618 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:40:34.132895   66618 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:40:34.132938   66618 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:40:34.133037   66618 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:40:34.134535   66618 out.go:235]   - Booting up control plane ...
	I0127 11:40:34.134662   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:40:34.134782   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:40:34.134941   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:40:34.135028   66618 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:40:34.135149   66618 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:40:34.135189   66618 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:40:34.135256   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:40:34.135551   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:40:34.135704   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:40:34.135951   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:40:34.136077   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:40:34.136323   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:40:34.136417   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:40:34.136683   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:40:34.136816   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:40:34.137088   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:40:34.137108   66618 kubeadm.go:310] 
	I0127 11:40:34.137168   66618 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:40:34.137241   66618 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:40:34.137261   66618 kubeadm.go:310] 
	I0127 11:40:34.137313   66618 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:40:34.137355   66618 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:40:34.137518   66618 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:40:34.137530   66618 kubeadm.go:310] 
	I0127 11:40:34.137659   66618 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:40:34.137714   66618 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:40:34.137769   66618 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:40:34.137782   66618 kubeadm.go:310] 
	I0127 11:40:34.137935   66618 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:40:34.138043   66618 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:40:34.138054   66618 kubeadm.go:310] 
	I0127 11:40:34.138180   66618 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:40:34.138321   66618 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:40:34.138440   66618 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:40:34.138586   66618 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:40:34.138637   66618 kubeadm.go:310] 
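Every [kubelet-check] failure above is the same probe timing out: kubeadm polls the kubelet's local healthz endpoint until the 4m0s wait-control-plane deadline expires. A sketch of reproducing the checks by hand inside the VM, using only the commands the log itself recommends:

	# the health probe kubeadm keeps retrying
	curl -sSL http://localhost:10248/healthz
	# if the connection is refused, inspect the kubelet directly
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# and see whether any control-plane container was created at all
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause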
	W0127 11:40:34.138745   66618 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-570778] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-570778] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 11:40:34.138788   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
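Before retrying, minikube tears the failed control plane down with the `kubeadm reset` shown above and then re-runs the identical `kubeadm init` command; note that the second attempt below reuses every previously generated certificate and key ("Using existing ..."). The equivalent manual reset, copied from the log:

	# wipe the failed control-plane state so init can be retried cleanly
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force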
	I0127 11:40:34.607135   66618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:40:34.626917   66618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:40:34.639727   66618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:40:34.639753   66618 kubeadm.go:157] found existing configuration files:
	
	I0127 11:40:34.639806   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:40:34.651101   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:40:34.651183   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:40:34.660512   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:40:34.668791   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:40:34.668857   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:40:34.677529   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:40:34.687243   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:40:34.687323   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:40:34.695916   66618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:40:34.704175   66618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:40:34.704233   66618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:40:34.713056   66618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:40:34.785812   66618 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:40:34.785916   66618 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:40:34.936075   66618 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:40:34.936222   66618 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:40:34.936368   66618 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:40:35.127190   66618 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:40:35.216365   66618 out.go:235]   - Generating certificates and keys ...
	I0127 11:40:35.216492   66618 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:40:35.216572   66618 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:40:35.216665   66618 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:40:35.216752   66618 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:40:35.216890   66618 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:40:35.216973   66618 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:40:35.217053   66618 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:40:35.217147   66618 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:40:35.217249   66618 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:40:35.217346   66618 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:40:35.217397   66618 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:40:35.217473   66618 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:40:35.459110   66618 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:40:35.866498   66618 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:40:35.943163   66618 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:40:36.155548   66618 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:40:36.177145   66618 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:40:36.177913   66618 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:40:36.177959   66618 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:40:36.311407   66618 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:40:36.425687   66618 out.go:235]   - Booting up control plane ...
	I0127 11:40:36.425839   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:40:36.425963   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:40:36.426038   66618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:40:36.426105   66618 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:40:36.426243   66618 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:41:16.334099   66618 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:41:16.334196   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:41:16.334401   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:41:21.334869   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:41:21.335073   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:41:31.335733   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:41:31.335910   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:41:51.334931   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:41:51.335197   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:42:31.334411   66618 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:42:31.334628   66618 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:42:31.334639   66618 kubeadm.go:310] 
	I0127 11:42:31.334684   66618 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:42:31.334766   66618 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:42:31.334797   66618 kubeadm.go:310] 
	I0127 11:42:31.334851   66618 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:42:31.334906   66618 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:42:31.335068   66618 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:42:31.335083   66618 kubeadm.go:310] 
	I0127 11:42:31.335212   66618 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:42:31.335266   66618 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:42:31.335306   66618 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:42:31.335319   66618 kubeadm.go:310] 
	I0127 11:42:31.335459   66618 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:42:31.335598   66618 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:42:31.335626   66618 kubeadm.go:310] 
	I0127 11:42:31.335790   66618 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:42:31.335916   66618 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:42:31.336034   66618 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:42:31.336157   66618 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:42:31.336169   66618 kubeadm.go:310] 
	I0127 11:42:31.336532   66618 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:42:31.336654   66618 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:42:31.336744   66618 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:42:31.336811   66618 kubeadm.go:394] duration metric: took 3m55.403751579s to StartCluster
	I0127 11:42:31.336864   66618 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:42:31.336915   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:42:31.371730   66618 cri.go:89] found id: ""
	I0127 11:42:31.371749   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.371757   66618 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:42:31.371765   66618 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:42:31.371819   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:42:31.412650   66618 cri.go:89] found id: ""
	I0127 11:42:31.412676   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.412695   66618 logs.go:284] No container was found matching "etcd"
	I0127 11:42:31.412702   66618 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:42:31.412758   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:42:31.452358   66618 cri.go:89] found id: ""
	I0127 11:42:31.452381   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.452402   66618 logs.go:284] No container was found matching "coredns"
	I0127 11:42:31.452409   66618 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:42:31.452462   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:42:31.487405   66618 cri.go:89] found id: ""
	I0127 11:42:31.487433   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.487444   66618 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:42:31.487451   66618 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:42:31.487512   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:42:31.524256   66618 cri.go:89] found id: ""
	I0127 11:42:31.524288   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.524299   66618 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:42:31.524312   66618 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:42:31.524371   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:42:31.562304   66618 cri.go:89] found id: ""
	I0127 11:42:31.562333   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.562345   66618 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:42:31.562353   66618 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:42:31.562413   66618 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:42:31.594211   66618 cri.go:89] found id: ""
	I0127 11:42:31.594236   66618 logs.go:282] 0 containers: []
	W0127 11:42:31.594244   66618 logs.go:284] No container was found matching "kindnet"
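After the timeout, minikube probes for each expected control-plane container by name; every query here returns an empty ID list, confirming that CRI-O never started anything. A compact sketch of the same sweep:

	# query CRI-O for each component the log checks; empty output means no container exists
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done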
	I0127 11:42:31.594254   66618 logs.go:123] Gathering logs for container status ...
	I0127 11:42:31.594264   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:42:31.649936   66618 logs.go:123] Gathering logs for kubelet ...
	I0127 11:42:31.649974   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:42:31.702475   66618 logs.go:123] Gathering logs for dmesg ...
	I0127 11:42:31.702508   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:42:31.722562   66618 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:42:31.722590   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:42:31.835146   66618 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:42:31.835166   66618 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:42:31.835180   66618 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
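With no containers to inspect, the post-mortem falls back to the raw sources gathered above. The same bundle can be collected by hand (paths as in the log; the `describe nodes` step fails here because the apiserver on localhost:8443 never came up):

	sudo journalctl -u kubelet -n 400      # kubelet unit log
	sudo journalctl -u crio -n 400         # CRI-O unit log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig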
	W0127 11:42:31.945685   66618 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:42:31.945741   66618 out.go:270] * 
	W0127 11:42:31.945794   66618 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:42:31.945809   66618 out.go:270] * 
	W0127 11:42:31.946604   66618 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:42:31.949683   66618 out.go:201] 
	W0127 11:42:31.951229   66618 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:42:31.951259   66618 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:42:31.951277   66618 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:42:31.952727   66618 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
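Note: the kubeadm output above already names the triage path for K8S_KUBELET_NOT_RUNNING: check the kubelet unit, then look for a crashed control-plane container. A minimal sketch of those steps, run inside the failed VM via `minikube ssh` (profile name taken from this run; CONTAINERID is a placeholder for whatever the ps listing returns):

	# open a shell in the VM for this profile
	out/minikube-linux-amd64 -p old-k8s-version-570778 ssh
	# inspect the kubelet unit and its journal, as the kubeadm message suggests
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers under CRI-O and read the failing one's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
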
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 6 (228.466535ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 11:42:32.219375   69912 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (291.64s)
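Note: both remediations minikube itself printed above are scriptable. A hedged sketch, not retried here, of repairing the stale kubectl context flagged in the post-mortem and restarting the profile with the suggested kubelet cgroup driver (flags copied from the failing invocation):

	# refresh the kubeconfig endpoint that status reported as stale
	out/minikube-linux-amd64 update-context -p old-k8s-version-570778
	# retry the first start with the cgroup-driver override from the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
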

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (1558.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (25m56.46087095s)
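Note: unlike the run above, this start did not error out; the harness killed it at the ~26-minute budget while component verification was still in flight (the stdout below shows the cluster and addons came up). A sketch of the evidence worth capturing before the VM is torn down, using only commands this report already references (the kubectl context name is assumed to match the profile, which minikube sets by default):

	# save the full minikube log bundle, as the warning box above suggests
	out/minikube-linux-amd64 -p no-preload-273200 logs --file=logs.txt
	# see which components the "Verifying Kubernetes components..." step was still waiting on
	kubectl --context no-preload-273200 get pods -A
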

                                                
                                                
-- stdout --
	* [no-preload-273200] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-273200" primary control-plane node in "no-preload-273200" cluster
	* Restarting existing kvm2 VM for "no-preload-273200" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-273200 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:41:48.270208   69396 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:41:48.270298   69396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:41:48.270302   69396 out.go:358] Setting ErrFile to fd 2...
	I0127 11:41:48.270307   69396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:41:48.270465   69396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:41:48.270992   69396 out.go:352] Setting JSON to false
	I0127 11:41:48.271937   69396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8608,"bootTime":1737969500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:41:48.272035   69396 start.go:139] virtualization: kvm guest
	I0127 11:41:48.274172   69396 out.go:177] * [no-preload-273200] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:41:48.275438   69396 notify.go:220] Checking for updates...
	I0127 11:41:48.275454   69396 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:41:48.276806   69396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:41:48.278167   69396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:41:48.279332   69396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:41:48.280625   69396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:41:48.281887   69396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:41:48.283487   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:41:48.283891   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:41:48.283950   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:41:48.298753   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0127 11:41:48.299115   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:41:48.299819   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:41:48.299846   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:41:48.300130   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:41:48.300316   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:41:48.300533   69396 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:41:48.300850   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:41:48.300900   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:41:48.314952   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0127 11:41:48.315349   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:41:48.315757   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:41:48.315782   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:41:48.316077   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:41:48.316252   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:41:48.351524   69396 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:41:48.352668   69396 start.go:297] selected driver: kvm2
	I0127 11:41:48.352679   69396 start.go:901] validating driver "kvm2" against &{Name:no-preload-273200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:41:48.352802   69396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:41:48.353527   69396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.353600   69396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:41:48.367619   69396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:41:48.368006   69396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:41:48.368035   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:41:48.368066   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:41:48.368101   69396 start.go:340] cluster config:
	{Name:no-preload-273200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:41:48.368192   69396 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.369822   69396 out.go:177] * Starting "no-preload-273200" primary control-plane node in "no-preload-273200" cluster
	I0127 11:41:48.371086   69396 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:41:48.371207   69396 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/config.json ...
	I0127 11:41:48.371346   69396 cache.go:107] acquiring lock: {Name:mkf32aa676040e80d6358c9ce72feb6288224505 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371375   69396 cache.go:107] acquiring lock: {Name:mk201e7b6da5afb247bd549b38adc2281751378e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371397   69396 cache.go:107] acquiring lock: {Name:mk683003b9708148b38141574c7f726f42817342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371434   69396 cache.go:107] acquiring lock: {Name:mk6a9fe86889d99b5303a0589cfb3fb092b69280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371448   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 11:41:48.371459   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 11:41:48.371476   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 11:41:48.371481   69396 cache.go:107] acquiring lock: {Name:mk572a6cc8a94bbe239e517962328f51d1e0f787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371493   69396 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 128.908µs
	I0127 11:41:48.371445   69396 start.go:360] acquireMachinesLock for no-preload-273200: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:41:48.371528   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 11:41:48.371535   69396 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 57.012µs
	I0127 11:41:48.371548   69396 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 11:41:48.371480   69396 cache.go:107] acquiring lock: {Name:mkbb00bb49331a7562b7666d27675532ea03e6c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371465   69396 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 135.651µs
	I0127 11:41:48.371674   69396 start.go:364] duration metric: took 149.313µs to acquireMachinesLock for "no-preload-273200"
	I0127 11:41:48.371679   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 11:41:48.371680   69396 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 11:41:48.371690   69396 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 286.714µs
	I0127 11:41:48.371493   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 11:41:48.371698   69396 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:41:48.371701   69396 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 11:41:48.371706   69396 fix.go:54] fixHost starting: 
	I0127 11:41:48.371705   69396 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 273.995µs
	I0127 11:41:48.371714   69396 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 11:41:48.371478   69396 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 81.849µs
	I0127 11:41:48.371721   69396 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 11:41:48.371351   69396 cache.go:107] acquiring lock: {Name:mk8285418fb5d9888869a9476e1a3d898a48757d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371498   69396 cache.go:107] acquiring lock: {Name:mk8cc96046e9867d6f34e5ff96e2bd33563f1b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:41:48.371754   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 11:41:48.371759   69396 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 423.31µs
	I0127 11:41:48.371764   69396 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 11:41:48.371514   69396 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 11:41:48.371780   69396 cache.go:115] /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 11:41:48.371790   69396 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 293.863µs
	I0127 11:41:48.371799   69396 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 11:41:48.371813   69396 cache.go:87] Successfully saved all images to host disk.
	I0127 11:41:48.372117   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:41:48.372158   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:41:48.386199   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0127 11:41:48.387304   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:41:48.388073   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:41:48.388097   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:41:48.388406   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:41:48.388665   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:41:48.388814   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:41:48.390307   69396 fix.go:112] recreateIfNeeded on no-preload-273200: state=Stopped err=<nil>
	I0127 11:41:48.390324   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	W0127 11:41:48.390453   69396 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:41:48.392245   69396 out.go:177] * Restarting existing kvm2 VM for "no-preload-273200" ...
	I0127 11:41:48.393641   69396 main.go:141] libmachine: (no-preload-273200) Calling .Start
	I0127 11:41:48.393838   69396 main.go:141] libmachine: (no-preload-273200) starting domain...
	I0127 11:41:48.393857   69396 main.go:141] libmachine: (no-preload-273200) ensuring networks are active...
	I0127 11:41:48.394667   69396 main.go:141] libmachine: (no-preload-273200) Ensuring network default is active
	I0127 11:41:48.395033   69396 main.go:141] libmachine: (no-preload-273200) Ensuring network mk-no-preload-273200 is active
	I0127 11:41:48.395409   69396 main.go:141] libmachine: (no-preload-273200) getting domain XML...
	I0127 11:41:48.396412   69396 main.go:141] libmachine: (no-preload-273200) creating domain...
	I0127 11:41:49.583236   69396 main.go:141] libmachine: (no-preload-273200) waiting for IP...
	I0127 11:41:49.583996   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:49.584478   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:49.584585   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:49.584487   69432 retry.go:31] will retry after 198.838748ms: waiting for domain to come up
	I0127 11:41:49.785168   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:49.785741   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:49.785762   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:49.785720   69432 retry.go:31] will retry after 365.21553ms: waiting for domain to come up
	I0127 11:41:50.152478   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:50.153032   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:50.153059   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:50.152999   69432 retry.go:31] will retry after 304.441113ms: waiting for domain to come up
	I0127 11:41:50.459376   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:50.459860   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:50.459894   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:50.459833   69432 retry.go:31] will retry after 479.524847ms: waiting for domain to come up
	I0127 11:41:50.940566   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:50.941037   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:50.941060   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:50.941003   69432 retry.go:31] will retry after 687.078309ms: waiting for domain to come up
	I0127 11:41:51.629786   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:51.630201   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:51.630226   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:51.630151   69432 retry.go:31] will retry after 621.692325ms: waiting for domain to come up
	I0127 11:41:52.253052   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:52.253501   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:52.253527   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:52.253481   69432 retry.go:31] will retry after 806.857792ms: waiting for domain to come up
	I0127 11:41:53.061685   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:53.062140   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:53.062169   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:53.062107   69432 retry.go:31] will retry after 1.466889313s: waiting for domain to come up
	I0127 11:41:54.531081   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:54.531582   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:54.531629   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:54.531554   69432 retry.go:31] will retry after 1.366099281s: waiting for domain to come up
	I0127 11:41:55.900003   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:55.900458   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:55.900510   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:55.900415   69432 retry.go:31] will retry after 2.251050968s: waiting for domain to come up
	I0127 11:41:58.154917   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:41:58.155399   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:41:58.155423   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:41:58.155370   69432 retry.go:31] will retry after 1.95565795s: waiting for domain to come up
	I0127 11:42:00.112586   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:00.113045   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:42:00.113101   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:42:00.113007   69432 retry.go:31] will retry after 2.419871263s: waiting for domain to come up
	I0127 11:42:02.535520   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:02.536034   69396 main.go:141] libmachine: (no-preload-273200) DBG | unable to find current IP address of domain no-preload-273200 in network mk-no-preload-273200
	I0127 11:42:02.536064   69396 main.go:141] libmachine: (no-preload-273200) DBG | I0127 11:42:02.535992   69432 retry.go:31] will retry after 3.877420564s: waiting for domain to come up
	I0127 11:42:06.417384   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.417841   69396 main.go:141] libmachine: (no-preload-273200) found domain IP: 192.168.61.181
	I0127 11:42:06.417858   69396 main.go:141] libmachine: (no-preload-273200) reserving static IP address...
	I0127 11:42:06.417868   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has current primary IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.418313   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "no-preload-273200", mac: "52:54:00:5b:91:77", ip: "192.168.61.181"} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.418337   69396 main.go:141] libmachine: (no-preload-273200) DBG | skip adding static IP to network mk-no-preload-273200 - found existing host DHCP lease matching {name: "no-preload-273200", mac: "52:54:00:5b:91:77", ip: "192.168.61.181"}
	I0127 11:42:06.418347   69396 main.go:141] libmachine: (no-preload-273200) reserved static IP address 192.168.61.181 for domain no-preload-273200
	I0127 11:42:06.418358   69396 main.go:141] libmachine: (no-preload-273200) waiting for SSH...
	I0127 11:42:06.418369   69396 main.go:141] libmachine: (no-preload-273200) DBG | Getting to WaitForSSH function...
	I0127 11:42:06.420169   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.420487   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.420510   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.420660   69396 main.go:141] libmachine: (no-preload-273200) DBG | Using SSH client type: external
	I0127 11:42:06.420678   69396 main.go:141] libmachine: (no-preload-273200) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa (-rw-------)
	I0127 11:42:06.420716   69396 main.go:141] libmachine: (no-preload-273200) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:42:06.420736   69396 main.go:141] libmachine: (no-preload-273200) DBG | About to run SSH command:
	I0127 11:42:06.420766   69396 main.go:141] libmachine: (no-preload-273200) DBG | exit 0
	I0127 11:42:06.543273   69396 main.go:141] libmachine: (no-preload-273200) DBG | SSH cmd err, output: <nil>: 
	I0127 11:42:06.543705   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetConfigRaw
	I0127 11:42:06.544355   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetIP
	I0127 11:42:06.546762   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.547130   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.547156   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.547414   69396 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/config.json ...
	I0127 11:42:06.547590   69396 machine.go:93] provisionDockerMachine start ...
	I0127 11:42:06.547639   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:06.547818   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:06.549975   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.550356   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.550386   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.550528   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:06.550697   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.550860   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.550970   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:06.551128   69396 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:06.551294   69396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0127 11:42:06.551304   69396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:42:06.651501   69396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:42:06.651527   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetMachineName
	I0127 11:42:06.651794   69396 buildroot.go:166] provisioning hostname "no-preload-273200"
	I0127 11:42:06.651817   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetMachineName
	I0127 11:42:06.652004   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:06.654361   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.654701   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.654737   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.654873   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:06.655043   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.655194   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.655410   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:06.655581   69396 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:06.655763   69396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0127 11:42:06.655783   69396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-273200 && echo "no-preload-273200" | sudo tee /etc/hostname
	I0127 11:42:06.768744   69396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-273200
	
	I0127 11:42:06.768776   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:06.771545   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.771919   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.771945   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.772177   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:06.772360   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.772590   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.772754   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:06.772973   69396 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:06.773206   69396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0127 11:42:06.773239   69396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-273200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-273200/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-273200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:42:06.880007   69396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:42:06.880042   69396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:42:06.880063   69396 buildroot.go:174] setting up certificates
	I0127 11:42:06.880074   69396 provision.go:84] configureAuth start
	I0127 11:42:06.880089   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetMachineName
	I0127 11:42:06.880358   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetIP
	I0127 11:42:06.882877   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.883145   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.883166   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.883271   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:06.885348   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.885641   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.885673   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.885791   69396 provision.go:143] copyHostCerts
	I0127 11:42:06.885840   69396 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:42:06.885857   69396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:42:06.885925   69396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:42:06.886008   69396 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:42:06.886018   69396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:42:06.886043   69396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:42:06.886091   69396 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:42:06.886099   69396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:42:06.886125   69396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:42:06.886171   69396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.no-preload-273200 san=[127.0.0.1 192.168.61.181 localhost minikube no-preload-273200]
	I0127 11:42:06.993349   69396 provision.go:177] copyRemoteCerts
	I0127 11:42:06.993404   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:42:06.993426   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:06.995978   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.996298   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:06.996335   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:06.996496   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:06.996640   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:06.996798   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:06.996969   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:42:07.081089   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:42:07.103133   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 11:42:07.125058   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:42:07.146295   69396 provision.go:87] duration metric: took 266.20884ms to configureAuth
	I0127 11:42:07.146316   69396 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:42:07.146516   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:42:07.146668   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:07.149693   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.150088   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.150112   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.150281   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:07.150477   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.150659   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.150802   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:07.150972   69396 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:07.151133   69396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0127 11:42:07.151146   69396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:42:07.358399   69396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:42:07.358423   69396 machine.go:96] duration metric: took 810.793564ms to provisionDockerMachine
	I0127 11:42:07.358434   69396 start.go:293] postStartSetup for "no-preload-273200" (driver="kvm2")
	I0127 11:42:07.358444   69396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:42:07.358463   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:07.358739   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:42:07.358765   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:07.361419   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.361754   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.361785   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.361923   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:07.362087   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.362237   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:07.362395   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:42:07.440828   69396 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:42:07.444526   69396 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:42:07.444544   69396 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:42:07.444596   69396 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:42:07.444675   69396 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:42:07.444757   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:42:07.453025   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:42:07.474759   69396 start.go:296] duration metric: took 116.312446ms for postStartSetup
	I0127 11:42:07.474793   69396 fix.go:56] duration metric: took 19.103088149s for fixHost
	I0127 11:42:07.474812   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:07.477572   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.477957   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.477990   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.478140   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:07.478335   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.478487   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.478627   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:07.478773   69396 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:07.478933   69396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0127 11:42:07.478943   69396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:42:07.579733   69396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978127.554789164
	
	I0127 11:42:07.579756   69396 fix.go:216] guest clock: 1737978127.554789164
	I0127 11:42:07.579764   69396 fix.go:229] Guest: 2025-01-27 11:42:07.554789164 +0000 UTC Remote: 2025-01-27 11:42:07.474797247 +0000 UTC m=+19.240891366 (delta=79.991917ms)
	I0127 11:42:07.579782   69396 fix.go:200] guest clock delta is within tolerance: 79.991917ms
	I0127 11:42:07.579794   69396 start.go:83] releasing machines lock for "no-preload-273200", held for 19.208110997s
	I0127 11:42:07.579815   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:07.580069   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetIP
	I0127 11:42:07.583025   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.583442   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.583462   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.583643   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:07.584259   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:07.584449   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:42:07.584533   69396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:42:07.584605   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:07.584661   69396 ssh_runner.go:195] Run: cat /version.json
	I0127 11:42:07.584688   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:42:07.587217   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.587509   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.587568   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.587584   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.587782   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:07.587956   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.587993   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:07.588017   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:07.588103   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:07.588165   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:42:07.588228   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:42:07.588283   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:42:07.588389   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:42:07.588533   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:42:07.663877   69396 ssh_runner.go:195] Run: systemctl --version
	I0127 11:42:07.685635   69396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:42:07.825318   69396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:42:07.830936   69396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:42:07.831007   69396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:42:07.847561   69396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:42:07.847580   69396 start.go:495] detecting cgroup driver to use...
	I0127 11:42:07.847649   69396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:42:07.864287   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:42:07.878104   69396 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:42:07.878147   69396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:42:07.891105   69396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:42:07.904066   69396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:42:08.025561   69396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:42:08.172130   69396 docker.go:233] disabling docker service ...
	I0127 11:42:08.172207   69396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:42:08.185445   69396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:42:08.197062   69396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:42:08.310574   69396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:42:08.420345   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:42:08.433493   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:42:08.449865   69396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:42:08.449921   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.459366   69396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:42:08.459425   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.468979   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.478416   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.487991   69396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:42:08.497914   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.507474   69396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:08.522849   69396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
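	[editor's note] The sed/grep edits above configure CRI-O in place; a hedged reconstruction of the keys they leave in /etc/crio/crio.conf.d/02-crio.conf, checkable on the guest:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, modulo surrounding keys and ordering:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])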
	I0127 11:42:08.532386   69396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:42:08.541034   69396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:42:08.541083   69396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:42:08.552511   69396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
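	[editor's note] The sysctl failure logged just above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, which is exactly what the subsequent modprobe fixes. The same sequence by hand:
	    sysctl net.bridge.bridge-nf-call-iptables   # fails while /proc/sys/net/bridge is absent
	    sudo modprobe br_netfilter                  # loading the module creates the bridge sysctls
	    sysctl net.bridge.bridge-nf-call-iptables   # now resolves
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward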
	I0127 11:42:08.561943   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:08.671738   69396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:42:08.770569   69396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:42:08.770648   69396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:42:08.774875   69396 start.go:563] Will wait 60s for crictl version
	I0127 11:42:08.774920   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:08.778192   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:42:08.811069   69396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:42:08.811143   69396 ssh_runner.go:195] Run: crio --version
	I0127 11:42:08.837394   69396 ssh_runner.go:195] Run: crio --version
	I0127 11:42:08.864955   69396 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:42:08.866391   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetIP
	I0127 11:42:08.868870   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:08.869245   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:42:08.869272   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:42:08.869485   69396 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 11:42:08.873122   69396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
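	[editor's note] The one-liner above is minikube's dedupe-then-append idiom for /etc/hosts: drop any stale host.minikube.internal entry, append the fresh mapping, and sudo-copy the temp file back (a plain redirect alone would not run as root). Spelled out:
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.61.1\thost.minikube.internal'
	    } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts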
	I0127 11:42:08.885696   69396 kubeadm.go:883] updating cluster {Name:no-preload-273200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:42:08.885829   69396 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:42:08.885867   69396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:42:08.918410   69396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:42:08.918436   69396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:42:08.918500   69396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:08.918520   69396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:08.918518   69396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:08.918560   69396 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 11:42:08.918574   69396 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:08.918602   69396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:08.918560   69396 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:08.918536   69396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:08.920307   69396 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 11:42:08.920323   69396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:08.920360   69396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:08.920369   69396 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:08.920387   69396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:08.920393   69396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:08.920395   69396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:08.920310   69396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.068076   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.069072   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:09.072415   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:09.073224   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:09.092366   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0127 11:42:09.092642   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:09.098916   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:09.136685   69396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0127 11:42:09.136727   69396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.136773   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.200585   69396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0127 11:42:09.200627   69396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:09.200675   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.207683   69396 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0127 11:42:09.207728   69396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:09.207774   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.230251   69396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0127 11:42:09.230293   69396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:09.230331   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.338235   69396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0127 11:42:09.338272   69396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0127 11:42:09.338284   69396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:09.338302   69396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:09.338334   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.338341   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:09.338356   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.338399   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:09.338430   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:09.338463   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:09.426456   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:09.426456   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:09.426559   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:09.426584   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:09.426608   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.426653   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:09.537991   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 11:42:09.546692   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 11:42:09.560997   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:09.561016   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:09.561059   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 11:42:09.561080   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 11:42:09.604267   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 11:42:09.604372   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 11:42:09.633076   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 11:42:09.633174   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0127 11:42:09.671457   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 11:42:09.673007   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 11:42:09.673090   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 11:42:09.673141   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0127 11:42:09.673160   69396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 11:42:09.673185   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0127 11:42:09.673199   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 11:42:09.673214   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0127 11:42:09.673185   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 11:42:09.673290   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 11:42:09.722508   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 11:42:09.722622   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 11:42:09.734069   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 11:42:09.734186   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 11:42:09.887889   69396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:11.657402   69396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: (1.984190332s)
	I0127 11:42:11.657445   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.984216974s)
	I0127 11:42:11.657467   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0127 11:42:11.657449   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0127 11:42:11.657484   69396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0127 11:42:11.657520   69396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.984207025s)
	I0127 11:42:11.657543   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0127 11:42:11.657554   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0127 11:42:11.657564   69396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1: (1.923363389s)
	I0127 11:42:11.657583   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0127 11:42:11.657545   69396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.934908152s)
	I0127 11:42:11.657599   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0127 11:42:11.657605   69396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.769679692s)
	I0127 11:42:11.657637   69396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0127 11:42:11.657669   69396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:11.657712   69396 ssh_runner.go:195] Run: which crictl
	I0127 11:42:15.158318   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.50072844s)
	I0127 11:42:15.158340   69396 ssh_runner.go:235] Completed: which crictl: (3.500608533s)
	I0127 11:42:15.158350   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0127 11:42:15.158367   69396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0127 11:42:15.158400   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:15.158409   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0127 11:42:15.208537   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:16.931724   69396 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.723148799s)
	I0127 11:42:16.931804   69396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:42:16.931800   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.773369047s)
	I0127 11:42:16.931875   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0127 11:42:16.931885   69396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 11:42:16.931912   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 11:42:18.989146   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (2.057207983s)
	I0127 11:42:18.989180   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0127 11:42:18.989184   69396 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.057363924s)
	I0127 11:42:18.989194   69396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 11:42:18.989217   69396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 11:42:18.989242   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 11:42:18.989295   69396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:42:21.056564   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (2.067291498s)
	I0127 11:42:21.056602   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0127 11:42:21.056612   69396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 11:42:21.056647   69396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.067329703s)
	I0127 11:42:21.056660   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 11:42:21.056675   69396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0127 11:42:22.818938   69396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.762256794s)
	I0127 11:42:22.818967   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0127 11:42:22.818995   69396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:42:22.819045   69396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:42:23.474907   69396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 11:42:23.474963   69396 cache_images.go:123] Successfully loaded all cached images
	I0127 11:42:23.474970   69396 cache_images.go:92] duration metric: took 14.556522448s to LoadCachedImages
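	[editor's note] With all eight cached images transferred and loaded, a quick verification one could run on the guest (a sketch; the list mirrors the LoadCachedImages set above):
	    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'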
	I0127 11:42:23.474983   69396 kubeadm.go:934] updating node { 192.168.61.181 8443 v1.32.1 crio true true} ...
	I0127 11:42:23.475096   69396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-273200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:42:23.475172   69396 ssh_runner.go:195] Run: crio config
	I0127 11:42:23.524555   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:42:23.524584   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:42:23.524595   69396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:42:23.524624   69396 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.181 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-273200 NodeName:no-preload-273200 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:42:23.524758   69396 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-273200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
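	[editor's note] The assembled kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; one way to sanity-check such a file without applying it (assuming the bundled kubeadm supports the validate subcommand, present in recent releases):
	    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new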
	I0127 11:42:23.524827   69396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:42:23.536358   69396 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:42:23.536438   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:42:23.546011   69396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 11:42:23.564605   69396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:42:23.583140   69396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0127 11:42:23.601551   69396 ssh_runner.go:195] Run: grep 192.168.61.181	control-plane.minikube.internal$ /etc/hosts
	I0127 11:42:23.605225   69396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:42:23.617226   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:23.740583   69396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:42:23.761662   69396 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200 for IP: 192.168.61.181
	I0127 11:42:23.761687   69396 certs.go:194] generating shared ca certs ...
	I0127 11:42:23.761708   69396 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:23.761878   69396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:42:23.761930   69396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:42:23.761945   69396 certs.go:256] generating profile certs ...
	I0127 11:42:23.762052   69396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.key
	I0127 11:42:23.762127   69396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key.47cca791
	I0127 11:42:23.762177   69396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key
	I0127 11:42:23.762348   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:42:23.762390   69396 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:42:23.762408   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:42:23.762439   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:42:23.762473   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:42:23.762507   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:42:23.762563   69396 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:42:23.763386   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:42:23.800987   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:42:23.843316   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:42:23.891640   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:42:23.917831   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 11:42:23.942432   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:42:23.965823   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:42:23.987460   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:42:24.012501   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:42:24.035087   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:42:24.057910   69396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:42:24.081415   69396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:42:24.098129   69396 ssh_runner.go:195] Run: openssl version
	I0127 11:42:24.103768   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:42:24.114355   69396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:42:24.118740   69396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:42:24.118805   69396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:42:24.124526   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:42:24.136529   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:42:24.147706   69396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:42:24.152129   69396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:42:24.152196   69396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:42:24.157950   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:42:24.169776   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:42:24.181598   69396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:24.185568   69396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:24.185624   69396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:24.191191   69396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
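	[editor's note] The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is why each ln -fs is preceded by an openssl x509 -hash run; the pattern by hand:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"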
	I0127 11:42:24.202558   69396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:42:24.206895   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:42:24.213290   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:42:24.219440   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:42:24.226835   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:42:24.234002   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:42:24.241192   69396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
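	[editor's note] The six -checkend 86400 probes above ask whether each certificate expires within the next 24 hours (exit status 0 means it remains valid past the window), e.g.:
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	        && echo "valid for at least 24h" || echo "expires within 24h"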
	I0127 11:42:24.248458   69396 kubeadm.go:392] StartCluster: {Name:no-preload-273200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-273200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:24.248560   69396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:42:24.248616   69396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:24.290807   69396 cri.go:89] found id: ""
	I0127 11:42:24.290867   69396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:42:24.304119   69396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:42:24.304142   69396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:42:24.304197   69396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:42:24.316271   69396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:42:24.316861   69396 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-273200" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:42:24.317142   69396 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-273200" cluster setting kubeconfig missing "no-preload-273200" context setting]
	I0127 11:42:24.317721   69396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:24.319324   69396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:42:24.329829   69396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.181
	I0127 11:42:24.329862   69396 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:42:24.329875   69396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:42:24.329925   69396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:24.373730   69396 cri.go:89] found id: ""
	I0127 11:42:24.373807   69396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:42:24.392977   69396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:42:24.403669   69396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:42:24.403686   69396 kubeadm.go:157] found existing configuration files:
	
	I0127 11:42:24.403733   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:42:24.413758   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:42:24.413818   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:42:24.424912   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:42:24.435170   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:42:24.435236   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:42:24.444928   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:42:24.453823   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:42:24.453885   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:42:24.462975   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:42:24.471950   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:42:24.472000   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
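The grep/rm pairs above apply one rule to each of the four kubeconfig-style files: if the file does not reference the expected control-plane endpoint, it is treated as stale and deleted so kubeadm can regenerate it. A compact sketch of that rule (paths and endpoint taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os"
	"strings"
)

// dropStaleConf keeps a conf file only if it mentions the expected endpoint;
// otherwise it removes it (RemoveAll is a no-op for missing files, matching
// the unconditional "rm -f" in the log).
func dropStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint; keep it
	}
	return os.RemoveAll(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := dropStaleConf("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Println(f, "cleanup failed:", err)
		}
	}
}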
	I0127 11:42:24.480955   69396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:42:24.490099   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:24.594856   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:25.976019   69396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.381125041s)
	I0127 11:42:25.976064   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:26.178716   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:26.249739   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:26.332832   69396 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:42:26.332905   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:26.833281   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:27.333870   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:27.349917   69396 api_server.go:72] duration metric: took 1.017085778s to wait for apiserver process to appear ...
	I0127 11:42:27.349942   69396 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:42:27.349958   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:42:30.063014   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:30.063045   69396 api_server.go:103] status: https://192.168.61.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:30.063062   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:42:30.102613   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:30.102639   69396 api_server.go:103] status: https://192.168.61.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:30.350019   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:42:30.375074   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:30.375105   69396 api_server.go:103] status: https://192.168.61.181:8443/healthz returned error 500:
	[... identical healthz body elided (same check list as above) ...]
	I0127 11:42:30.850782   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:42:30.858489   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 500:
	[... identical healthz body elided: rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still failing ...]
	W0127 11:42:30.858520   69396 api_server.go:103] status: https://192.168.61.181:8443/healthz returned error 500:
	[... identical healthz body elided ...]
	I0127 11:42:31.350110   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:42:31.358080   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0127 11:42:31.368663   69396 api_server.go:141] control plane version: v1.32.1
	I0127 11:42:31.368690   69396 api_server.go:131] duration metric: took 4.018742549s to wait for apiserver health ...
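The healthz progression above is the expected startup sequence: 403 while RBAC for the anonymous probe is not yet bootstrapped, 500 while the rbac/bootstrap-roles and scheduling poststart hooks finish, then 200. Mechanically it is an HTTPS poll with certificate verification relaxed; a self-contained sketch (endpoint from the log; the poll interval and deadline are illustrative rather than minikube's exact values):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not in the system trust store, so an
			// anonymous health probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.181:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 both land here and are retried, as in the log.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}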
	I0127 11:42:31.368699   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:42:31.368705   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:42:31.370569   69396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:42:31.371800   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:42:31.383212   69396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
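The scp line above installs a 496-byte bridge conflist into /etc/cni/net.d. The file's exact contents are not reproduced in the log; a representative bridge-plus-portmap conflist of the kind this step writes (cniVersion, subnet, and flags here are illustrative assumptions based on the standard CNI bridge plugin, not the literal minikube file):

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root, hence the
	// "sudo mkdir -p" in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}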
	I0127 11:42:31.426882   69396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:42:31.443295   69396 system_pods.go:59] 8 kube-system pods found
	I0127 11:42:31.443341   69396 system_pods.go:61] "coredns-668d6bf9bc-nw8jm" [273530f1-b5b6-46f0-871a-31451f6b1401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:42:31.443355   69396 system_pods.go:61] "etcd-no-preload-273200" [0e30984e-bd1e-49c0-abae-b9d609a9062f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:42:31.443367   69396 system_pods.go:61] "kube-apiserver-no-preload-273200" [1cbcca0d-129f-4841-82c3-ded5e2ea618d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:42:31.443383   69396 system_pods.go:61] "kube-controller-manager-no-preload-273200" [7e28524a-76d7-4803-ad79-4e642d9ecf56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:42:31.443391   69396 system_pods.go:61] "kube-proxy-b79m8" [f050eeb2-dc74-4171-8d96-6460ff58a02e] Running
	I0127 11:42:31.443403   69396 system_pods.go:61] "kube-scheduler-no-preload-273200" [fa844908-04e8-4928-96e1-134deb8fd039] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:42:31.443415   69396 system_pods.go:61] "metrics-server-f79f97bbb-75rzv" [78cf45c3-2336-404f-a84b-5513f6e10c0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:42:31.443425   69396 system_pods.go:61] "storage-provisioner" [e8c5beeb-9be6-4b90-bde0-1225896fa4fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:42:31.443441   69396 system_pods.go:74] duration metric: took 16.529749ms to wait for pod list to return data ...
	I0127 11:42:31.443454   69396 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:42:31.449262   69396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:42:31.449290   69396 node_conditions.go:123] node cpu capacity is 2
	I0127 11:42:31.449302   69396 node_conditions.go:105] duration metric: took 5.840888ms to run NodePressure ...
	I0127 11:42:31.449319   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:31.716285   69396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:42:31.720701   69396 kubeadm.go:739] kubelet initialised
	I0127 11:42:31.720728   69396 kubeadm.go:740] duration metric: took 4.402695ms waiting for restarted kubelet to initialise ...
	I0127 11:42:31.720738   69396 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:42:31.726079   69396 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nw8jm" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:31.732476   69396 pod_ready.go:98] node "no-preload-273200" hosting pod "coredns-668d6bf9bc-nw8jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.732503   69396 pod_ready.go:82] duration metric: took 6.401232ms for pod "coredns-668d6bf9bc-nw8jm" in "kube-system" namespace to be "Ready" ...
	E0127 11:42:31.732514   69396 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-273200" hosting pod "coredns-668d6bf9bc-nw8jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.732529   69396 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:31.737457   69396 pod_ready.go:98] node "no-preload-273200" hosting pod "etcd-no-preload-273200" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.737486   69396 pod_ready.go:82] duration metric: took 4.945171ms for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	E0127 11:42:31.737497   69396 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-273200" hosting pod "etcd-no-preload-273200" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.737506   69396 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:31.742079   69396 pod_ready.go:98] node "no-preload-273200" hosting pod "kube-apiserver-no-preload-273200" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.742103   69396 pod_ready.go:82] duration metric: took 4.580957ms for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	E0127 11:42:31.742114   69396 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-273200" hosting pod "kube-apiserver-no-preload-273200" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-273200" has status "Ready":"False"
	I0127 11:42:31.742123   69396 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:33.750727   69396 pod_ready.go:103] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:36.254793   69396 pod_ready.go:103] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:38.748371   69396 pod_ready.go:103] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:41.865778   69396 pod_ready.go:103] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:44.250876   69396 pod_ready.go:103] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:46.249605   69396 pod_ready.go:93] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:46.249633   69396 pod_ready.go:82] duration metric: took 14.507494963s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:46.249648   69396 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b79m8" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:46.255174   69396 pod_ready.go:93] pod "kube-proxy-b79m8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:46.255200   69396 pod_ready.go:82] duration metric: took 5.542992ms for pod "kube-proxy-b79m8" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:46.255211   69396 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:46.259155   69396 pod_ready.go:93] pod "kube-scheduler-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:46.259179   69396 pod_ready.go:82] duration metric: took 3.960135ms for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:46.259190   69396 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:48.265668   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 101 near-identical poll lines elided: same "Ready":"False" result roughly every 2.5s between 11:42:50 and 11:46:43 ...]
	I0127 11:46:45.265779   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:46.259360   69396 pod_ready.go:82] duration metric: took 4m0.000152356s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:46.259407   69396 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:46.259422   69396 pod_ready.go:39] duration metric: took 4m14.538674469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
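The four-minute metrics-server wait that just timed out is, at bottom, a poll on the pod's Ready condition. A minimal client-go sketch of the same check, with the kubeconfig path and pod name taken from the log (a simplification of minikube's pod_ready.go, which as the earlier lines show also skips pods whose node is not Ready):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20319-18835/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := "metrics-server-f79f97bbb-75rzv"
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, pod, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as "not yet"; keep polling
			}
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// err == nil means the pod went Ready; a timeout error mirrors the
	// "will not retry!" failure above.
	fmt.Println("wait result:", err)
}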
	I0127 11:46:46.259449   69396 kubeadm.go:597] duration metric: took 4m21.955300548s to restartPrimaryControlPlane
	W0127 11:46:46.259525   69396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:46.259559   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:47:13.916547   69396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.656958711s)
	I0127 11:47:13.916611   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:13.933947   69396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:13.945813   69396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:13.956760   69396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:13.956784   69396 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:13.956829   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:13.967874   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:13.967928   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:13.978307   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:13.988624   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:13.988681   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:14.000424   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.012062   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:14.012123   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.021263   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:14.031880   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:14.031940   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:47:14.043324   69396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:14.085914   69396 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:14.085997   69396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:14.183080   69396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:14.183249   69396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:14.183394   69396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:14.195440   69396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:14.197259   69396 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:14.197356   69396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:14.197854   69396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:14.198266   69396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:14.198428   69396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:14.198787   69396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:14.200947   69396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:14.201202   69396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:14.201438   69396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:14.201742   69396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:14.201820   69396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:14.201962   69396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:14.202056   69396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:14.393335   69396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:14.578877   69396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:14.683103   69396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:14.892112   69396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:15.059210   69396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:15.059802   69396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:15.062493   69396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:15.064304   69396 out.go:235]   - Booting up control plane ...
	I0127 11:47:15.064419   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:15.064539   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:15.064632   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:15.081619   69396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:15.087804   69396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:15.087864   69396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:15.215883   69396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:15.216024   69396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:15.717623   69396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.507256ms
	I0127 11:47:15.717711   69396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:20.718798   69396 kubeadm.go:310] [api-check] The API server is healthy after 5.001299318s
	I0127 11:47:20.735824   69396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:20.751647   69396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:20.776203   69396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:20.776453   69396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-273200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:20.786999   69396 kubeadm.go:310] [bootstrap-token] Using token: tjwk8y.hsba31n3brg7yicx
	I0127 11:47:20.788426   69396 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:20.788582   69396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:20.793089   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:20.803401   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:20.812287   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:20.816685   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:20.822172   69396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:21.128937   69396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:21.553347   69396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:22.127179   69396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:22.127210   69396 kubeadm.go:310] 
	I0127 11:47:22.127314   69396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:22.127342   69396 kubeadm.go:310] 
	I0127 11:47:22.127419   69396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:22.127428   69396 kubeadm.go:310] 
	I0127 11:47:22.127467   69396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:22.127532   69396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:22.127584   69396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:22.127594   69396 kubeadm.go:310] 
	I0127 11:47:22.127682   69396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:22.127691   69396 kubeadm.go:310] 
	I0127 11:47:22.127757   69396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:22.127768   69396 kubeadm.go:310] 
	I0127 11:47:22.127848   69396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:22.127969   69396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:22.128089   69396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:22.128103   69396 kubeadm.go:310] 
	I0127 11:47:22.128204   69396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:22.128331   69396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:22.128350   69396 kubeadm.go:310] 
	I0127 11:47:22.128485   69396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.128622   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:22.128658   69396 kubeadm.go:310] 	--control-plane 
	I0127 11:47:22.128669   69396 kubeadm.go:310] 
	I0127 11:47:22.128793   69396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:22.128805   69396 kubeadm.go:310] 
	I0127 11:47:22.128921   69396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.129015   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:22.129734   69396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:47:22.129770   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:47:22.129781   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:22.131454   69396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:22.132751   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:22.143934   69396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:22.162031   69396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:22.162109   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.162131   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-273200 minikube.k8s.io/updated_at=2025_01_27T11_47_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-273200 minikube.k8s.io/primary=true
	I0127 11:47:22.357159   69396 ops.go:34] apiserver oom_adj: -16
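The minikube-rbac command above grants cluster-admin to the kube-system default service account, and the repeated "get sa default" probes that follow wait for the default service account to be provisioned before declaring privileges elevated. The same binding expressed through client-go rather than kubectl (a sketch; kubeconfig path from the log):

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same binding the kubectl command creates: cluster-admin for the
	// kube-system:default service account.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{})
	fmt.Println("create minikube-rbac:", err)
}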
	I0127 11:47:22.357255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.858227   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.357378   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.858261   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.358001   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.858052   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.358029   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.858255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.357827   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.545723   69396 kubeadm.go:1113] duration metric: took 4.38367816s to wait for elevateKubeSystemPrivileges
	I0127 11:47:26.545828   69396 kubeadm.go:394] duration metric: took 5m2.297374967s to StartCluster
	I0127 11:47:26.545882   69396 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.545994   69396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:26.548122   69396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.548782   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:26.548545   69396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:26.548897   69396 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:26.549176   69396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-273200"
	I0127 11:47:26.549197   69396 addons.go:238] Setting addon storage-provisioner=true in "no-preload-273200"
	W0127 11:47:26.549209   69396 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:47:26.549239   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.549690   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.549730   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.549955   69396 addons.go:69] Setting default-storageclass=true in profile "no-preload-273200"
	I0127 11:47:26.549974   69396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-273200"
	I0127 11:47:26.550340   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.550368   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.550531   69396 addons.go:69] Setting metrics-server=true in profile "no-preload-273200"
	I0127 11:47:26.550551   69396 addons.go:238] Setting addon metrics-server=true in "no-preload-273200"
	W0127 11:47:26.550559   69396 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:26.550590   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550587   69396 addons.go:69] Setting dashboard=true in profile "no-preload-273200"
	I0127 11:47:26.550619   69396 addons.go:238] Setting addon dashboard=true in "no-preload-273200"
	W0127 11:47:26.550629   69396 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:26.550671   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550795   69396 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:26.550980   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551018   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.551086   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551125   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.552072   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:26.591135   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0127 11:47:26.591160   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0127 11:47:26.591337   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0127 11:47:26.591436   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0127 11:47:26.591962   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.591974   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592254   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592532   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592551   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592661   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592682   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592699   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592683   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.593029   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593065   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593226   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.593239   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593679   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593720   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.593787   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593821   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.596147   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.600142   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.600157   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.602457   69396 addons.go:238] Setting addon default-storageclass=true in "no-preload-273200"
	W0127 11:47:26.602479   69396 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:26.602510   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.602874   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.602914   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.604120   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.608202   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.608245   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.617629   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0127 11:47:26.618396   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.618963   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.618984   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.619363   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.619536   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.621603   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.623294   69396 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:26.625658   69396 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:26.626912   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:26.626933   69396 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:26.626955   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.630583   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0127 11:47:26.630587   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 11:47:26.631073   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.631690   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.631710   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.631883   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.632167   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.632324   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.632658   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.632673   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.633439   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.633559   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.633993   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.634505   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.634533   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.634773   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.634922   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.635051   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.635188   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.636019   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.636059   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.642473   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 11:47:26.645166   69396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:26.646249   69396 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:26.646264   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:26.646281   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.651734   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.651803   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.651826   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.651843   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.652136   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.659702   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.659915   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.663957   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0127 11:47:26.664289   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665037   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665168   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665183   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665558   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.665749   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665761   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665970   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.666585   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.666886   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.667729   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669615   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669619   69396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:26.669962   69396 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:26.669979   69396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:26.669998   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.670903   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:26.670919   69396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:26.670935   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.675429   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678600   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678659   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678709   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678726   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678749   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678771   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678781   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678803   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.678993   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.679036   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679128   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679182   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.679386   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.875833   69396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:26.920571   69396 node_ready.go:35] waiting up to 6m0s for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939903   69396 node_ready.go:49] node "no-preload-273200" has status "Ready":"True"
	I0127 11:47:26.939926   69396 node_ready.go:38] duration metric: took 19.319573ms for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939937   69396 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:26.959191   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:27.008467   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:27.081273   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:27.081304   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:27.101527   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:27.152011   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:27.152043   69396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:27.244718   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:27.244747   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:27.252472   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.252495   69396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:27.296605   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.313892   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:27.313920   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:27.403990   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:27.404022   69396 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:27.477781   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:27.477811   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:27.571056   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:27.571086   69396 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:27.705284   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:27.705316   69396 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:27.789319   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:27.789349   69396 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:27.870737   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:27.870774   69396 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:27.935415   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:27.935444   69396 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:27.990927   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:28.098209   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089707756s)
	I0127 11:47:28.098259   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098271   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098370   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098402   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098565   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098581   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098609   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098618   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098707   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098721   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098730   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098738   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098839   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.098925   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098945   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.099049   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.099059   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.099062   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.114073   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.114099   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.114382   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.114404   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.614645   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.317992457s)
	I0127 11:47:28.614719   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.614737   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.615709   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.615736   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.615759   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.615779   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.615792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.617426   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.617436   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.617454   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.617473   69396 addons.go:479] Verifying addon metrics-server=true in "no-preload-273200"
	I0127 11:47:28.972192   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.485321   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.494345914s)
	I0127 11:47:29.485395   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485413   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.485754   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.485774   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.485784   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.486141   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:29.486164   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.486172   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.487790   69396 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-273200 addons enable metrics-server
	
	I0127 11:47:29.489175   69396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:29.490582   69396 addons.go:514] duration metric: took 2.941688444s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:31.467084   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.966528   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.970381   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:36.467240   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.467270   69396 pod_ready.go:82] duration metric: took 9.508045614s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.467284   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474274   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.474309   69396 pod_ready.go:82] duration metric: took 7.015963ms for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474322   69396 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480897   69396 pod_ready.go:93] pod "etcd-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.480926   69396 pod_ready.go:82] duration metric: took 6.596204ms for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480938   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487288   69396 pod_ready.go:93] pod "kube-apiserver-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.487320   69396 pod_ready.go:82] duration metric: took 6.372473ms for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487332   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497692   69396 pod_ready.go:93] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.497721   69396 pod_ready.go:82] duration metric: took 10.381356ms for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497733   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864696   69396 pod_ready.go:93] pod "kube-proxy-mct6v" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.864728   69396 pod_ready.go:82] duration metric: took 366.98634ms for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864742   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265304   69396 pod_ready.go:93] pod "kube-scheduler-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:37.265326   69396 pod_ready.go:82] duration metric: took 400.576908ms for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265334   69396 pod_ready.go:39] duration metric: took 10.325386118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:37.265347   69396 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:37.265391   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:37.284810   69396 api_server.go:72] duration metric: took 10.735955735s to wait for apiserver process to appear ...
	I0127 11:47:37.284832   69396 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:37.284859   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:47:37.292026   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0127 11:47:37.293646   69396 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:37.293675   69396 api_server.go:131] duration metric: took 8.835297ms to wait for apiserver health ...
	I0127 11:47:37.293685   69396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:37.469184   69396 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:37.469220   69396 system_pods.go:61] "coredns-668d6bf9bc-nqskc" [a9b24f06-5dc0-4a9e-a8f4-c6f311389c62] Running
	I0127 11:47:37.469228   69396 system_pods.go:61] "coredns-668d6bf9bc-qh6rg" [05780b99-a232-4846-a4b6-111f8d3d386e] Running
	I0127 11:47:37.469234   69396 system_pods.go:61] "etcd-no-preload-273200" [d1362a7f-ee18-4157-b8df-b9a3a9372f0a] Running
	I0127 11:47:37.469240   69396 system_pods.go:61] "kube-apiserver-no-preload-273200" [32c9d6be-2aac-475a-b7ba-0414122f7c6b] Running
	I0127 11:47:37.469247   69396 system_pods.go:61] "kube-controller-manager-no-preload-273200" [1091690b-7b66-4f8d-aa90-567ff97c5c19] Running
	I0127 11:47:37.469252   69396 system_pods.go:61] "kube-proxy-mct6v" [7cd1c7e8-827a-491e-8093-a7a3afc26544] Running
	I0127 11:47:37.469257   69396 system_pods.go:61] "kube-scheduler-no-preload-273200" [fde979de-7c70-4ef8-8d23-6ed01a30bf76] Running
	I0127 11:47:37.469265   69396 system_pods.go:61] "metrics-server-f79f97bbb-z6fn6" [8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:37.469270   69396 system_pods.go:61] "storage-provisioner" [42d86701-11bb-4b1c-a522-ec9e7912d024] Running
	I0127 11:47:37.469280   69396 system_pods.go:74] duration metric: took 175.587004ms to wait for pod list to return data ...
	I0127 11:47:37.469292   69396 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:37.664628   69396 default_sa.go:45] found service account: "default"
	I0127 11:47:37.664664   69396 default_sa.go:55] duration metric: took 195.36433ms for default service account to be created ...
	I0127 11:47:37.664679   69396 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:37.868541   69396 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
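The wait loop in the stderr log above polls each system-critical pod for the Ready condition and then probes the apiserver's healthz endpoint directly. A rough manual equivalent, assuming minikube's usual kubeconfig context named after the profile (the context name is an assumption, not shown in the log):

	# Poll kube-dns pods until Ready, mirroring the pod_ready waits (context name assumed).
	kubectl --context no-preload-273200 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s
	# Probe the apiserver health endpoint checked at 11:47:37; -k skips TLS verification for the cluster's self-signed certificate.
	curl -k https://192.168.61.181:8443/healthz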
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
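For triage, the killed start can be re-run by hand with the exact arguments recorded in the failure message above (a sketch; how long it runs before being killed depends on the harness's timeout, not on minikube itself):

	out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 \
		--alsologtostderr --wait=true --preload=false \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1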
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273200 -n no-preload-273200
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-273200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-273200 logs -n 25: (1.389134185s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-429764 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | disable-driver-mounts-429764                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:41 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-273200             | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-986409            | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-407489  | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:43 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273200                  | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-986409                 | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC | 27 Jan 25 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570778        | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-407489       | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC | 27 Jan 25 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC |                     |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570778             | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 12:07 UTC | 27 Jan 25 12:07 UTC |
	| start   | -p newest-cni-929622 --memory=2200 --alsologtostderr   | newest-cni-929622            | jenkins | v1.35.0 | 27 Jan 25 12:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:07:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:07:43.826685   76183 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:07:43.826833   76183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:07:43.826845   76183 out.go:358] Setting ErrFile to fd 2...
	I0127 12:07:43.826851   76183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:07:43.827024   76183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 12:07:43.827651   76183 out.go:352] Setting JSON to false
	I0127 12:07:43.828819   76183 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10164,"bootTime":1737969500,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:07:43.828954   76183 start.go:139] virtualization: kvm guest
	I0127 12:07:43.831381   76183 out.go:177] * [newest-cni-929622] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:07:43.833056   76183 notify.go:220] Checking for updates...
	I0127 12:07:43.833059   76183 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 12:07:43.834579   76183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:07:43.835844   76183 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 12:07:43.837106   76183 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 12:07:43.838455   76183 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:07:43.839772   76183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:07:43.841425   76183 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:07:43.841551   76183 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:07:43.841664   76183 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:07:43.841761   76183 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:07:43.880782   76183 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:07:43.882191   76183 start.go:297] selected driver: kvm2
	I0127 12:07:43.882206   76183 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:07:43.882218   76183 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:07:43.882965   76183 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:07:43.883056   76183 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:07:43.898864   76183 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:07:43.898919   76183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0127 12:07:43.898967   76183 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0127 12:07:43.899205   76183 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 12:07:43.899233   76183 cni.go:84] Creating CNI manager for ""
	I0127 12:07:43.899274   76183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:07:43.899287   76183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:07:43.899389   76183 start.go:340] cluster config:
	{Name:newest-cni-929622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-929622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
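The CNI decision recorded just above (cni.go: "kvm2" driver + "crio" runtime, recommending bridge, then NetworkPlugin=cni in the generated config) reduces to a small lookup when no CNI was requested explicitly. A minimal Go sketch of that rule; chooseCNI is a hypothetical, heavily simplified stand-in for minikube's real cni.go, which handles many more driver/runtime combinations and the --cni flag:

	package main

	import "fmt"

	// chooseCNI mirrors the decision in the log: honor an explicit --cni value,
	// otherwise pick a CNI that suits the driver/runtime pair.
	func chooseCNI(requested, driver, runtime string) string {
		if requested != "" {
			return requested // explicit --cni wins
		}
		// "kvm2" driver + "crio" runtime found, recommending bridge
		if driver == "kvm2" && runtime == "crio" {
			return "bridge"
		}
		return "auto"
	}

	func main() {
		fmt.Println(chooseCNI("", "kvm2", "crio")) // prints: bridge
	}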
	I0127 12:07:43.899493   76183 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:07:43.901516   76183 out.go:177] * Starting "newest-cni-929622" primary control-plane node in "newest-cni-929622" cluster
	I0127 12:07:43.902631   76183 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:07:43.902662   76183 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:07:43.902675   76183 cache.go:56] Caching tarball of preloaded images
	I0127 12:07:43.902761   76183 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:07:43.902779   76183 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
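The preload step above is a plain cache check: if the versioned tarball is already on disk, the download is skipped ("Found ... in cache, skipping download"). A sketch under that assumption; the helper name is illustrative, only the path comes from the log:

	package main

	import (
		"fmt"
		"os"
	)

	// preloadExists reports whether a preloaded-images tarball is already cached.
	func preloadExists(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	func main() {
		p := "/home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4"
		if preloadExists(p) {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download")
		}
	}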
	I0127 12:07:43.902877   76183 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/newest-cni-929622/config.json ...
	I0127 12:07:43.902901   76183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/newest-cni-929622/config.json: {Name:mk33658f7015957446b8dbfd7aa4082e87bddc39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:07:43.903064   76183 start.go:360] acquireMachinesLock for newest-cni-929622: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:07:43.903100   76183 start.go:364] duration metric: took 18.966µs to acquireMachinesLock for "newest-cni-929622"
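Every lock acquired in this trace carries the same spec shape ({Name:... Delay:500ms Timeout:...}), which implies a poll-until-deadline acquisition, with the elapsed time reported as a duration metric as in start.go:364 above. A minimal sketch of that Delay/Timeout pattern; the tryLock callback is hypothetical and minikube's actual lock implementation differs:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// acquire polls tryLock every delay until it succeeds or timeout elapses,
	// mirroring the Delay/Timeout fields in the lock specs logged above.
	func acquire(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if tryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		err := acquire(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
		fmt.Println(err, "took", time.Since(start)) // duration metric, as in start.go:364
	}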
	I0127 12:07:43.903122   76183 start.go:93] Provisioning new machine with config: &{Name:newest-cni-929622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-929622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:07:43.903198   76183 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.429651235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0b0db3e-fc01-4fee-a25f-3931d54c98f4 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.440826047Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bceac829-bb81-4a27-8423-e5c064898049 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.441101541Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff796c7f51c49f1b0ea35dbddec81015cb24f9eb682b52a67e71d3f059da4501,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-86c6bf9756-pbtjt,Uid:59f37677-c2ea-4dcb-b18a-92b5053279e2,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978449596307457,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-pbtjt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 59f37677-c2ea-4dcb-b18a-92b5053279e2,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:29.282892263Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e
7d0dfa75ec2,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-54km5,Uid:a270c60b-6c89-4565-a877-2b2d18ccea96,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978449594296752,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:29.287497017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e81e424599a2669c6275432ddc9166d0c583814efb1018429ae9446327344b4,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-z6fn6,Uid:8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978448664740918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.po
d.name: metrics-server-f79f97bbb-z6fn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:28.354605644Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:42d86701-11bb-4b1c-a522-ec9e7912d024,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978448405331281,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e7912d024,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T11:47:28.089748540Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-qh6rg,Uid:05780b99-a232-4846-a4b6-111f8d3d386e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446837185874,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.506348100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-nqskc,Uid:a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446775979153,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.467914691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandb
ox{Id:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&PodSandboxMetadata{Name:kube-proxy-mct6v,Uid:7cd1c7e8-827a-491e-8093-a7a3afc26544,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446716476074,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.388573285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-273200,Uid:15dd5f88732089acf60f6d52b03f504f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737978436079601840,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.181:8443,kubernetes.io/config.hash: 15dd5f88732089acf60f6d52b03f504f,kubernetes.io/config.seen: 2025-01-27T11:47:15.634324521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-273200,Uid:076f79ac4ca4261ce15bd6755b68a9d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436072293693,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.a
dvertise-client-urls: https://192.168.61.181:2379,kubernetes.io/config.hash: 076f79ac4ca4261ce15bd6755b68a9d3,kubernetes.io/config.seen: 2025-01-27T11:47:15.634323114Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-273200,Uid:c0434b5a1d1b98ff8e214be0a6adbf51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436071250806,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0434b5a1d1b98ff8e214be0a6adbf51,kubernetes.io/config.seen: 2025-01-27T11:47:15.634321769Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1a
a9df4c8f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-273200,Uid:a11b65464b6fc49e0eba257d7cbc4ffb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436061982045,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a11b65464b6fc49e0eba257d7cbc4ffb,kubernetes.io/config.seen: 2025-01-27T11:47:15.634317127Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bceac829-bb81-4a27-8423-e5c064898049 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.441702260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=307b9ff5-0784-4900-8c1e-07301ff6df7d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.441775334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=307b9ff5-0784-4900-8c1e-07301ff6df7d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.441962761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8aee085b459f70312909e3e39bd5e5cbaafc4e2e5280f15d1de276cb74ca3c,PodSandboxId:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e7d0dfa75ec2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978460176020790,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,},Annotations:map[string]string{io.kubernetes.container.hash
: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28364fe38ad4002695585ed2ecb452e99ba9934cf8652247e197778856d36a6,PodSandboxId:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978448812485695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e79
12d024,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71edcc25ff527f705f44b987cff768fc94454f9d1378394334a5f285a78e3e8,PodSandboxId:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447917298386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,},Annotations:map[string
]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81eff67e5c95ed05db0f384675e83cfaf18bd4d2acddf69d188e55432ca909c,PodSandboxId:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447548774359,Labels:map[string]string{io.kubernetes.con
tainer.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a517d2fa35e6b6974af855edd8ab558d86b8b0ec82420132bf0eef2046ea84,PodSandboxId:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978446923979359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217a518f8cb24ea3706af66c26e83bfd0614e46c32742e3ccc6c1e3e6db50b39,PodSandboxId:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978436259139313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73f73301eea99ce4d4646b33afcff610c9e3390df3af1c22d0b1913ab71f962,PodSandboxId:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6
572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978436279725883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8267177703ec23aa6b363d37bf6b14e40f610b5e4fec54174f3f2b3a5e30952,PodSandboxId:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956
c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978436216068716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fbcc5e89b5e89164cf30b5aea6d5e1b1864aedb5bc31b13aee3cce2de1e837,PodSandboxId:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1aa9df4c8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832
908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978436195863915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=307b9ff5-0784-4900-8c1e-07301ff6df7d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.466761094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1455c62-8ef1-4ea4-9947-ac9398b25acb name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.466854565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1455c62-8ef1-4ea4-9947-ac9398b25acb name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.468141989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6a60844-bfca-4549-bdc0-5dc1c8e5e120 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.468651342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979665468627344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6a60844-bfca-4549-bdc0-5dc1c8e5e120 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.469507765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a35f212-42a2-4bb1-af7c-222306a84586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.469570161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a35f212-42a2-4bb1-af7c-222306a84586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.469793610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732,PodSandboxId:ff796c7f51c49f1b0ea35dbddec81015cb24f9eb682b52a67e71d3f059da4501,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979407452016726,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-pbtjt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 59f37677-c2ea-4dcb-b18a-92b5053279e2,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8aee085b459f70312909e3e39bd5e5cbaafc4e2e5280f15d1de276cb74ca3c,PodSandboxId:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e7d0dfa75ec2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978460176020790,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28364fe38ad4002695585ed2ecb452e99ba9934cf8652247e197778856d36a6,PodSandboxId:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978448812485695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e7912d024,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71edcc25ff527f705f44b987cff768fc94454f9d1378394334a5f285a78e3e8,PodSandboxId:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447917298386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81eff67e5c95ed05db0f384675e83cfaf18bd4d2acddf69d188e55432ca909c,PodSandboxId:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447548774359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a517d2fa35e6b6974af855edd8ab558d86b8b0ec82420132bf0eef2046ea84,PodSandboxId:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978446923979359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217a518f8cb24ea3706af66c26e83bfd0614e46c32742e3ccc6c1e3e6db50b39,PodSandboxId:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c
04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978436259139313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73f73301eea99ce4d4646b33afcff610c9e3390df3af1c22d0b1913ab71f962,PodSandboxId:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3
da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978436279725883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8267177703ec23aa6b363d37bf6b14e40f610b5e4fec54174f3f2b3a5e30952,PodSandboxId:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abb
fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978436216068716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fbcc5e89b5e89164cf30b5aea6d5e1b1864aedb5bc31b13aee3cce2de1e837,PodSandboxId:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1aa9df4c8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978436195863915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff73bdcc97d74fcf97a3b1865aefd70276c6713e65315cbe394eb8af31531ff,PodSandboxId:87790d56308e084a9604af36843f218dca854dfc4edfe1a0f466cb7019a7af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978146913839633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a35f212-42a2-4bb1-af7c-222306a84586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.472490964Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5a327c91-dd5f-4147-8b46-c866472e55dd name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.472956119Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff796c7f51c49f1b0ea35dbddec81015cb24f9eb682b52a67e71d3f059da4501,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-86c6bf9756-pbtjt,Uid:59f37677-c2ea-4dcb-b18a-92b5053279e2,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978449596307457,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-pbtjt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 59f37677-c2ea-4dcb-b18a-92b5053279e2,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:29.282892263Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e
7d0dfa75ec2,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-54km5,Uid:a270c60b-6c89-4565-a877-2b2d18ccea96,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978449594296752,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:29.287497017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e81e424599a2669c6275432ddc9166d0c583814efb1018429ae9446327344b4,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-z6fn6,Uid:8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978448664740918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.po
d.name: metrics-server-f79f97bbb-z6fn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:28.354605644Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:42d86701-11bb-4b1c-a522-ec9e7912d024,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978448405331281,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e7912d024,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T11:47:28.089748540Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-qh6rg,Uid:05780b99-a232-4846-a4b6-111f8d3d386e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446837185874,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.506348100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-nqskc,Uid:a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446775979153,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.467914691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandb
ox{Id:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&PodSandboxMetadata{Name:kube-proxy-mct6v,Uid:7cd1c7e8-827a-491e-8093-a7a3afc26544,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978446716476074,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:47:26.388573285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-273200,Uid:15dd5f88732089acf60f6d52b03f504f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737978436079601840,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.181:8443,kubernetes.io/config.hash: 15dd5f88732089acf60f6d52b03f504f,kubernetes.io/config.seen: 2025-01-27T11:47:15.634324521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-273200,Uid:076f79ac4ca4261ce15bd6755b68a9d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436072293693,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.a
dvertise-client-urls: https://192.168.61.181:2379,kubernetes.io/config.hash: 076f79ac4ca4261ce15bd6755b68a9d3,kubernetes.io/config.seen: 2025-01-27T11:47:15.634323114Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-273200,Uid:c0434b5a1d1b98ff8e214be0a6adbf51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436071250806,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0434b5a1d1b98ff8e214be0a6adbf51,kubernetes.io/config.seen: 2025-01-27T11:47:15.634321769Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1a
a9df4c8f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-273200,Uid:a11b65464b6fc49e0eba257d7cbc4ffb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978436061982045,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a11b65464b6fc49e0eba257d7cbc4ffb,kubernetes.io/config.seen: 2025-01-27T11:47:15.634317127Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87790d56308e084a9604af36843f218dca854dfc4edfe1a0f466cb7019a7af10,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-273200,Uid:15dd5f88732089acf60f6d52b03f504f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1737978146760871223,Labels:map[string]string{component: kube-apiserver,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.181:8443,kubernetes.io/config.hash: 15dd5f88732089acf60f6d52b03f504f,kubernetes.io/config.seen: 2025-01-27T11:42:26.269649993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5a327c91-dd5f-4147-8b46-c866472e55dd name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.473927445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54a2e942-387c-4fb7-8bd0-14d8d798f4f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.474054435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54a2e942-387c-4fb7-8bd0-14d8d798f4f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.474599589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732,PodSandboxId:ff796c7f51c49f1b0ea35dbddec81015cb24f9eb682b52a67e71d3f059da4501,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979407452016726,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-pbtjt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 59f37677-c2ea-4dcb-b18a-92b5053279e2,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8aee085b459f70312909e3e39bd5e5cbaafc4e2e5280f15d1de276cb74ca3c,PodSandboxId:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e7d0dfa75ec2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978460176020790,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28364fe38ad4002695585ed2ecb452e99ba9934cf8652247e197778856d36a6,PodSandboxId:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978448812485695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e7912d024,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71edcc25ff527f705f44b987cff768fc94454f9d1378394334a5f285a78e3e8,PodSandboxId:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447917298386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81eff67e5c95ed05db0f384675e83cfaf18bd4d2acddf69d188e55432ca909c,PodSandboxId:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447548774359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a517d2fa35e6b6974af855edd8ab558d86b8b0ec82420132bf0eef2046ea84,PodSandboxId:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978446923979359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217a518f8cb24ea3706af66c26e83bfd0614e46c32742e3ccc6c1e3e6db50b39,PodSandboxId:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c
04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978436259139313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73f73301eea99ce4d4646b33afcff610c9e3390df3af1c22d0b1913ab71f962,PodSandboxId:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3
da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978436279725883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8267177703ec23aa6b363d37bf6b14e40f610b5e4fec54174f3f2b3a5e30952,PodSandboxId:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abb
fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978436216068716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fbcc5e89b5e89164cf30b5aea6d5e1b1864aedb5bc31b13aee3cce2de1e837,PodSandboxId:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1aa9df4c8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978436195863915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff73bdcc97d74fcf97a3b1865aefd70276c6713e65315cbe394eb8af31531ff,PodSandboxId:87790d56308e084a9604af36843f218dca854dfc4edfe1a0f466cb7019a7af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978146913839633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54a2e942-387c-4fb7-8bd0-14d8d798f4f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.517723345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50dbb5f8-39e2-496c-b782-c98a914a2f3f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.517795346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50dbb5f8-39e2-496c-b782-c98a914a2f3f name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.518689813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de5e2ce6-dfad-4078-a534-062267b12e16 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.519050250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979665519031724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de5e2ce6-dfad-4078-a534-062267b12e16 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.519677793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06a8f65a-3dda-44ab-9def-9dc53b9b76eb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.519731070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06a8f65a-3dda-44ab-9def-9dc53b9b76eb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:45 no-preload-273200 crio[722]: time="2025-01-27 12:07:45.519978281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732,PodSandboxId:ff796c7f51c49f1b0ea35dbddec81015cb24f9eb682b52a67e71d3f059da4501,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979407452016726,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-pbtjt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 59f37677-c2ea-4dcb-b18a-92b5053279e2,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8aee085b459f70312909e3e39bd5e5cbaafc4e2e5280f15d1de276cb74ca3c,PodSandboxId:86b52b5dbef282dc36b8324862cdede340f2edf74a1e48226da1e7d0dfa75ec2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978460176020790,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-54km5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: a270c60b-6c89-4565-a877-2b2d18ccea96,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28364fe38ad4002695585ed2ecb452e99ba9934cf8652247e197778856d36a6,PodSandboxId:936cbd88dc461f5279c134d712d7411d729401a23bb9e60b522439cf04e1c7e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978448812485695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d86701-11bb-4b1c-a522-ec9e7912d024,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71edcc25ff527f705f44b987cff768fc94454f9d1378394334a5f285a78e3e8,PodSandboxId:86d2bc348b97114a23da322c3b4e8f3da5940ec078f60fd65bc089b6afb477bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447917298386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qh6rg,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 05780b99-a232-4846-a4b6-111f8d3d386e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81eff67e5c95ed05db0f384675e83cfaf18bd4d2acddf69d188e55432ca909c,PodSandboxId:ad551636c7ef889bfbe8d5b7d23cb52a0824787e1341da0635675a2b54cb854b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978447548774359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nqskc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9b24f06-5dc0-4a9e-a8f4-c6f311389c62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a517d2fa35e6b6974af855edd8ab558d86b8b0ec82420132bf0eef2046ea84,PodSandboxId:0cc6542aa226a6f7a37779237b59d3f3ec42ef7e7843042d7d7b0761d8b18bf5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978446923979359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mct6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd1c7e8-827a-491e-8093-a7a3afc26544,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217a518f8cb24ea3706af66c26e83bfd0614e46c32742e3ccc6c1e3e6db50b39,PodSandboxId:543aa557d6254fa0341e8b9c698687713df805016b442e3b8be6e2786c9fe602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c
04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978436259139313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73f73301eea99ce4d4646b33afcff610c9e3390df3af1c22d0b1913ab71f962,PodSandboxId:c2b2737eed09bdad0622c00ecb8391c65bc0716e2c3cc0430f72c0b2534b8653,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3
da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978436279725883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0434b5a1d1b98ff8e214be0a6adbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8267177703ec23aa6b363d37bf6b14e40f610b5e4fec54174f3f2b3a5e30952,PodSandboxId:f68f7af22299699ad8d5e79b1a4544b14588ffb2f6cf2fa6ab0f654aa3f8a661,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abb
fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978436216068716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076f79ac4ca4261ce15bd6755b68a9d3,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fbcc5e89b5e89164cf30b5aea6d5e1b1864aedb5bc31b13aee3cce2de1e837,PodSandboxId:fa30a4a239f073626f893a6c04a10f2b883e63082bd78bfaf99a6e1aa9df4c8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978436195863915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11b65464b6fc49e0eba257d7cbc4ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff73bdcc97d74fcf97a3b1865aefd70276c6713e65315cbe394eb8af31531ff,PodSandboxId:87790d56308e084a9604af36843f218dca854dfc4edfe1a0f466cb7019a7af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978146913839633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273200,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dd5f88732089acf60f6d52b03f504f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06a8f65a-3dda-44ab-9def-9dc53b9b76eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	222425e838be6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   ff796c7f51c49       dashboard-metrics-scraper-86c6bf9756-pbtjt
	7b8aee085b459       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   86b52b5dbef28       kubernetes-dashboard-7779f9b69b-54km5
	d28364fe38ad4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 minutes ago      Running             storage-provisioner         0                   936cbd88dc461       storage-provisioner
	b71edcc25ff52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   86d2bc348b971       coredns-668d6bf9bc-qh6rg
	b81eff67e5c95       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   ad551636c7ef8       coredns-668d6bf9bc-nqskc
	46a517d2fa35e       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           20 minutes ago      Running             kube-proxy                  0                   0cc6542aa226a       kube-proxy-mct6v
	c73f73301eea9       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           20 minutes ago      Running             kube-scheduler              2                   c2b2737eed09b       kube-scheduler-no-preload-273200
	217a518f8cb24       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           20 minutes ago      Running             kube-apiserver              2                   543aa557d6254       kube-apiserver-no-preload-273200
	f8267177703ec       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           20 minutes ago      Running             etcd                        2                   f68f7af222996       etcd-no-preload-273200
	16fbcc5e89b5e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           20 minutes ago      Running             kube-controller-manager     2                   fa30a4a239f07       kube-controller-manager-no-preload-273200
	3ff73bdcc97d7       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           25 minutes ago      Exited              kube-apiserver              1                   87790d56308e0       kube-apiserver-no-preload-273200
	
	
	==> coredns [b71edcc25ff527f705f44b987cff768fc94454f9d1378394334a5f285a78e3e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b81eff67e5c95ed05db0f384675e83cfaf18bd4d2acddf69d188e55432ca909c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-273200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-273200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=no-preload-273200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_47_22_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:47:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-273200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:07:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:03:51 +0000   Mon, 27 Jan 2025 11:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:03:51 +0000   Mon, 27 Jan 2025 11:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:03:51 +0000   Mon, 27 Jan 2025 11:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:03:51 +0000   Mon, 27 Jan 2025 11:47:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.181
	  Hostname:    no-preload-273200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e334b5610b9849f399a042689bb47506
	  System UUID:                e334b561-0b98-49f3-99a0-42689bb47506
	  Boot ID:                    8369751c-063d-4331-a6f5-6594f1593bf9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-nqskc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-qh6rg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-273200                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-no-preload-273200              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-no-preload-273200     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-mct6v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-273200              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-z6fn6                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-pbtjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-54km5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node no-preload-273200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node no-preload-273200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node no-preload-273200 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node no-preload-273200 event: Registered Node no-preload-273200 in Controller
	
	
	==> dmesg <==
	[  +0.036604] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.820906] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.907707] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542316] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 11:42] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.056301] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059293] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.165068] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.118628] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.249364] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +15.056117] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.066378] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.364377] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +3.432133] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.923659] kauditd_printk_skb: 86 callbacks suppressed
	[Jan27 11:47] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.328413] systemd-fstab-generator[3229]: Ignoring "noauto" option for root device
	[  +4.561519] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.486803] systemd-fstab-generator[3569]: Ignoring "noauto" option for root device
	[  +5.463169] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
	[  +0.138725] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.890855] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.548316] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [f8267177703ec23aa6b363d37bf6b14e40f610b5e4fec54174f3f2b3a5e30952] <==
	{"level":"info","ts":"2025-01-27T11:47:39.273549Z","caller":"traceutil/trace.go:171","msg":"trace[1690175724] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:516; }","duration":"231.191583ms","start":"2025-01-27T11:47:39.042342Z","end":"2025-01-27T11:47:39.273533Z","steps":["trace[1690175724] 'read index received'  (duration: 231.187469ms)","trace[1690175724] 'applied index is now lower than readState.Index'  (duration: 3.327µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T11:47:39.273761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.413938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:47:39.273787Z","caller":"traceutil/trace.go:171","msg":"trace[1251398932] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:501; }","duration":"231.473503ms","start":"2025-01-27T11:47:39.042305Z","end":"2025-01-27T11:47:39.273779Z","steps":["trace[1251398932] 'agreement among raft nodes before linearized reading'  (duration: 231.38361ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:47:39.436261Z","caller":"traceutil/trace.go:171","msg":"trace[1549644173] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:516; }","duration":"162.574928ms","start":"2025-01-27T11:47:39.273665Z","end":"2025-01-27T11:47:39.436240Z","steps":["trace[1549644173] 'read index received'  (duration: 159.957428ms)","trace[1549644173] 'applied index is now lower than readState.Index'  (duration: 2.616761ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T11:47:39.436341Z","caller":"traceutil/trace.go:171","msg":"trace[805385135] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"265.006021ms","start":"2025-01-27T11:47:39.171330Z","end":"2025-01-27T11:47:39.436336Z","steps":["trace[805385135] 'process raft request'  (duration: 262.309472ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:47:39.436486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.488491ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T11:47:39.436517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.867883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:47:39.436537Z","caller":"traceutil/trace.go:171","msg":"trace[303154809] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:502; }","duration":"157.918558ms","start":"2025-01-27T11:47:39.278613Z","end":"2025-01-27T11:47:39.436531Z","steps":["trace[303154809] 'agreement among raft nodes before linearized reading'  (duration: 157.876143ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:47:39.436532Z","caller":"traceutil/trace.go:171","msg":"trace[909811459] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:502; }","duration":"321.547922ms","start":"2025-01-27T11:47:39.114972Z","end":"2025-01-27T11:47:39.436520Z","steps":["trace[909811459] 'agreement among raft nodes before linearized reading'  (duration: 321.470203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:47:39.436796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.503318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-01-27T11:47:39.436830Z","caller":"traceutil/trace.go:171","msg":"trace[377098034] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:502; }","duration":"241.557686ms","start":"2025-01-27T11:47:39.195260Z","end":"2025-01-27T11:47:39.436817Z","steps":["trace[377098034] 'agreement among raft nodes before linearized reading'  (duration: 241.392156ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:48:52.020059Z","caller":"traceutil/trace.go:171","msg":"trace[991563641] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"120.580517ms","start":"2025-01-27T11:48:51.899459Z","end":"2025-01-27T11:48:52.020040Z","steps":["trace[991563641] 'process raft request'  (duration: 120.459567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:48:52.290705Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.833048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:48:52.290924Z","caller":"traceutil/trace.go:171","msg":"trace[270205487] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:608; }","duration":"103.138355ms","start":"2025-01-27T11:48:52.187769Z","end":"2025-01-27T11:48:52.290907Z","steps":["trace[270205487] 'count revisions from in-memory index tree'  (duration: 102.754963ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:48:52.291171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.087348ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T11:48:52.291835Z","caller":"traceutil/trace.go:171","msg":"trace[995651361] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:608; }","duration":"176.759653ms","start":"2025-01-27T11:48:52.115065Z","end":"2025-01-27T11:48:52.291825Z","steps":["trace[995651361] 'range keys from in-memory index tree'  (duration: 176.079325ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T11:57:17.221481Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2025-01-27T11:57:17.252876Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":829,"took":"31.026535ms","hash":3436485945,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2949120,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T11:57:17.252982Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3436485945,"revision":829,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T12:02:17.229307Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2025-01-27T12:02:17.232902Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1081,"took":"3.233872ms","hash":79443372,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:02:17.232982Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":79443372,"revision":1081,"compact-revision":829}
	{"level":"info","ts":"2025-01-27T12:07:17.238081Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1333}
	{"level":"info","ts":"2025-01-27T12:07:17.241606Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1333,"took":"3.257424ms","hash":193353985,"current-db-size-bytes":2949120,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-01-27T12:07:17.241657Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":193353985,"revision":1333,"compact-revision":1081}
	
	
	==> kernel <==
	 12:07:45 up 25 min,  0 users,  load average: 0.12, 0.18, 0.18
	Linux no-preload-273200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [217a518f8cb24ea3706af66c26e83bfd0614e46c32742e3ccc6c1e3e6db50b39] <==
	I0127 12:03:19.831656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:03:19.831708       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:05:19.832799       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 12:05:19.832933       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:05:19.833082       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 12:05:19.833142       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:05:19.834244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:05:19.834334       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:07:18.833893       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:18.834081       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:07:19.836881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:19.837000       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:07:19.836918       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:19.837118       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:07:19.838252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:07:19.838296       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [3ff73bdcc97d74fcf97a3b1865aefd70276c6713e65315cbe394eb8af31531ff] <==
	W0127 11:47:12.473285       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.535184       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.536566       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.541107       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.563949       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.566423       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.594742       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.685786       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.700276       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.785676       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.794053       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.844614       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.878583       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.943482       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:12.971023       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.134786       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.134786       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.138169       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.176731       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.199571       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.252315       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.360212       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.388234       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.401616       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:13.442697       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [16fbcc5e89b5e89164cf30b5aea6d5e1b1864aedb5bc31b13aee3cce2de1e837] <==
	E0127 12:03:25.533623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:03:25.592004       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:03:27.574765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="18.262222ms"
	I0127 12:03:27.574848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="37.424µs"
	I0127 12:03:28.589627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="14.334816ms"
	I0127 12:03:28.589932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="87.076µs"
	I0127 12:03:29.571171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="40.222µs"
	I0127 12:03:36.454849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="70.346µs"
	I0127 12:03:51.069922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-273200"
	E0127 12:03:55.538580       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:03:55.598680       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:04:25.544050       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:04:25.605617       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:04:55.550797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:04:55.614647       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:25.556011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:25.622182       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:55.561447       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:55.629576       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:25.567551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:25.637231       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:55.574123       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:55.644502       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:25.580201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:25.651930       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [46a517d2fa35e6b6974af855edd8ab558d86b8b0ec82420132bf0eef2046ea84] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:47:27.318088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:47:27.332797       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.181"]
	E0127 11:47:27.332855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:47:27.567542       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:47:27.567621       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:47:27.567651       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:47:27.571307       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:47:27.571572       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:47:27.571601       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:47:27.573017       1 config.go:199] "Starting service config controller"
	I0127 11:47:27.573057       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:47:27.573088       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:47:27.573092       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:47:27.577571       1 config.go:329] "Starting node config controller"
	I0127 11:47:27.577636       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:47:27.673736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:47:27.673809       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:47:27.681219       1 shared_informer.go:320] Caches are synced for node config
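	Two details in this kube-proxy log are worth calling out. The nftables cleanup failures ("Operation not supported", including the truncated first entry above) are expected on a kernel without nftables support, and kube-proxy falls back to the iptables proxier as logged. The nodePortAddresses warning can be addressed exactly as the message suggests; a minimal sketch for a kubeadm-style cluster like this one, where the setting lives in the kube-proxy ConfigMap:
	
	    # kubectl --context no-preload-273200 -n kube-system edit configmap kube-proxy
	    # then, inside the KubeProxyConfiguration in config.conf, set:
	    #   nodePortAddresses:
	    #   - primary    # accept NodePort traffic only on each node's primary IPs
	
	kube-proxy re-reads the ConfigMap only after its pods are restarted.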
	
	
	==> kube-scheduler [c73f73301eea99ce4d4646b33afcff610c9e3390df3af1c22d0b1913ab71f962] <==
	W0127 11:47:18.844444       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 11:47:18.844507       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:18.844559       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:47:18.844591       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:18.844559       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:47:18.844883       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:18.844800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:18.844915       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:18.844988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:47:18.845024       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:19.659546       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:47:19.659627       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:19.709538       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:47:19.709575       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:19.897301       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:47:19.897469       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:19.924156       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:47:19.924592       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:19.994312       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:47:19.994386       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:20.043576       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:47:20.043709       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:20.130262       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:47:20.130680       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 11:47:23.336902       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
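	The burst of "forbidden" errors is the usual scheduler startup race: its informers begin listing before the API server has reconciled the RBAC bootstrap policy, and the errors stop once the cache sync above succeeds at 11:47:23. If the permissions ever need checking after the fact, impersonation gives a one-liner (a sketch):
	
	    kubectl --context no-preload-273200 auth can-i list pods --as=system:kube-scheduler --all-namespaces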
	
	
	==> kubelet <==
	Jan 27 12:07:01 no-preload-273200 kubelet[3576]: E0127 12:07:01.790657    3576 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979621789758066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:02 no-preload-273200 kubelet[3576]: I0127 12:07:02.440068    3576 scope.go:117] "RemoveContainer" containerID="222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732"
	Jan 27 12:07:02 no-preload-273200 kubelet[3576]: E0127 12:07:02.440263    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pbtjt_kubernetes-dashboard(59f37677-c2ea-4dcb-b18a-92b5053279e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pbtjt" podUID="59f37677-c2ea-4dcb-b18a-92b5053279e2"
	Jan 27 12:07:11 no-preload-273200 kubelet[3576]: E0127 12:07:11.793318    3576 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979631792808362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:11 no-preload-273200 kubelet[3576]: E0127 12:07:11.793402    3576 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979631792808362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:15 no-preload-273200 kubelet[3576]: E0127 12:07:15.442200    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z6fn6" podUID="8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2"
	Jan 27 12:07:17 no-preload-273200 kubelet[3576]: I0127 12:07:17.440118    3576 scope.go:117] "RemoveContainer" containerID="222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732"
	Jan 27 12:07:17 no-preload-273200 kubelet[3576]: E0127 12:07:17.440424    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pbtjt_kubernetes-dashboard(59f37677-c2ea-4dcb-b18a-92b5053279e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pbtjt" podUID="59f37677-c2ea-4dcb-b18a-92b5053279e2"
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]: E0127 12:07:21.471784    3576 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]: E0127 12:07:21.795935    3576 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979641795547076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:21 no-preload-273200 kubelet[3576]: E0127 12:07:21.795976    3576 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979641795547076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:29 no-preload-273200 kubelet[3576]: E0127 12:07:29.442444    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z6fn6" podUID="8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2"
	Jan 27 12:07:31 no-preload-273200 kubelet[3576]: I0127 12:07:31.443296    3576 scope.go:117] "RemoveContainer" containerID="222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732"
	Jan 27 12:07:31 no-preload-273200 kubelet[3576]: E0127 12:07:31.443937    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pbtjt_kubernetes-dashboard(59f37677-c2ea-4dcb-b18a-92b5053279e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pbtjt" podUID="59f37677-c2ea-4dcb-b18a-92b5053279e2"
	Jan 27 12:07:31 no-preload-273200 kubelet[3576]: E0127 12:07:31.797457    3576 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979651797085834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:31 no-preload-273200 kubelet[3576]: E0127 12:07:31.797480    3576 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979651797085834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:41 no-preload-273200 kubelet[3576]: E0127 12:07:41.799200    3576 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661798873745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:41 no-preload-273200 kubelet[3576]: E0127 12:07:41.799720    3576 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661798873745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:07:43 no-preload-273200 kubelet[3576]: I0127 12:07:43.440921    3576 scope.go:117] "RemoveContainer" containerID="222425e838be656cbadf7362c6a2c4addd8942de7e34ad57b9ec7ea528b74732"
	Jan 27 12:07:43 no-preload-273200 kubelet[3576]: E0127 12:07:43.441109    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-pbtjt_kubernetes-dashboard(59f37677-c2ea-4dcb-b18a-92b5053279e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-pbtjt" podUID="59f37677-c2ea-4dcb-b18a-92b5053279e2"
	Jan 27 12:07:43 no-preload-273200 kubelet[3576]: E0127 12:07:43.441568    3576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z6fn6" podUID="8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2"
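	Three distinct failures repeat in this kubelet log: the eviction manager cannot get dedicated image-filesystem stats from CRI-O, metrics-server is in ImagePullBackOff because this test family deliberately rewrites its registry to fake.domain (the CustomAddonRegistries entry visible in the profile config later in this report), and dashboard-metrics-scraper sits in a 5m CrashLoopBackOff. The image-pull failure can be confirmed directly (a sketch; k8s-app=metrics-server is the label minikube's addon manifests use):
	
	    kubectl --context no-preload-273200 -n kube-system describe pods -l k8s-app=metrics-server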
	
	
	==> kubernetes-dashboard [7b8aee085b459f70312909e3e39bd5e5cbaafc4e2e5280f15d1de276cb74ca3c] <==
	2025/01/27 11:55:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:56:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:56:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:57:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:57:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
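	The dashboard's metric client probes the dashboard-metrics-scraper Service every 30 seconds, and each probe fails with the 503-style "server is currently unable to handle the request" error because the scraper pod is stuck in CrashLoopBackOff (see the kubelet log above). Inspecting the Service's endpoints makes the missing backend visible (a sketch):
	
	    kubectl --context no-preload-273200 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper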
	
	
	==> storage-provisioner [d28364fe38ad4002695585ed2ecb452e99ba9934cf8652247e197778856d36a6] <==
	I0127 11:47:28.916307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:47:28.938300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:47:28.938392       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:47:28.956237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:47:28.956950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-273200_99395246-3901-4c41-98d9-3fe092aa013e!
	I0127 11:47:28.959881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01ca7712-84fd-47a0-bff7-b50cc4dbcdc4", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-273200_99395246-3901-4c41-98d9-3fe092aa013e became leader
	I0127 11:47:29.057745       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-273200_99395246-3901-4c41-98d9-3fe092aa013e!
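	The storage provisioner is the one healthy component here: it acquires its leader-election lock via the kube-system/k8s.io-minikube-hostpath Endpoints object, as the events above show. The current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object (a sketch):
	
	    kubectl --context no-preload-273200 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml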
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-273200 -n no-preload-273200
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-273200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-z6fn6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-273200 describe pod metrics-server-f79f97bbb-z6fn6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-273200 describe pod metrics-server-f79f97bbb-z6fn6: exit status 1 (63.25166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-z6fn6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-273200 describe pod metrics-server-f79f97bbb-z6fn6: exit status 1
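The NotFound here, milliseconds after the same pod name was listed as non-running, suggests the pod was deleted and recreated under a new name between the two kubectl calls; the post-mortem helper describes a captured name rather than re-querying. Listing by field selector avoids that race (a sketch, extending the command the helper already uses):

	kubectl --context no-preload-273200 get pods -A --field-selector=status.phase!=Running -o wide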
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1558.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1645.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (27m23.480099169s)
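"signal: killed" after 27m23s indicates the start command never finished waiting for the cluster and was killed from outside, most likely by the enclosing go test timeout rather than by minikube itself. When reproducing locally, bounding the readiness wait keeps the failure visible instead of the process being killed (a sketch; --wait-timeout is minikube's bound on waiting for the cluster to become healthy, default 6m):

	out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1 --wait=true --wait-timeout=10m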

                                                
                                                
-- stdout --
	* [embed-certs-986409] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-986409" primary control-plane node in "embed-certs-986409" cluster
	* Restarting existing kvm2 VM for "embed-certs-986409" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-986409 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:42:12.659351   69688 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:42:12.659483   69688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:12.659494   69688 out.go:358] Setting ErrFile to fd 2...
	I0127 11:42:12.659501   69688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:42:12.659813   69688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:42:12.660413   69688 out.go:352] Setting JSON to false
	I0127 11:42:12.661425   69688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8633,"bootTime":1737969500,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:42:12.661538   69688 start.go:139] virtualization: kvm guest
	I0127 11:42:12.663656   69688 out.go:177] * [embed-certs-986409] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:42:12.665085   69688 notify.go:220] Checking for updates...
	I0127 11:42:12.665121   69688 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:42:12.666576   69688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:42:12.667872   69688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:42:12.669171   69688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:42:12.670396   69688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:42:12.671718   69688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:42:12.673194   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:42:12.673592   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:12.673633   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:12.688996   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0127 11:42:12.689427   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:12.689939   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:42:12.689959   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:12.690291   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:12.690497   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:12.690778   69688 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:42:12.691202   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:12.691278   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:12.705467   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35261
	I0127 11:42:12.705848   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:12.706320   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:42:12.706346   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:12.706672   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:12.706900   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:12.748739   69688 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:42:12.750042   69688 start.go:297] selected driver: kvm2
	I0127 11:42:12.750065   69688 start.go:901] validating driver "kvm2" against &{Name:embed-certs-986409 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-986409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:12.750251   69688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:42:12.751305   69688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:12.751430   69688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:42:12.766350   69688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:42:12.766931   69688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:42:12.766972   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:42:12.767019   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:42:12.767111   69688 start.go:340] cluster config:
	{Name:embed-certs-986409 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-986409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:12.767257   69688 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:42:12.769185   69688 out.go:177] * Starting "embed-certs-986409" primary control-plane node in "embed-certs-986409" cluster
	I0127 11:42:12.770818   69688 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:42:12.770860   69688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:42:12.770868   69688 cache.go:56] Caching tarball of preloaded images
	I0127 11:42:12.770978   69688 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:42:12.770993   69688 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:42:12.771126   69688 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/config.json ...
	I0127 11:42:12.771378   69688 start.go:360] acquireMachinesLock for embed-certs-986409: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:42:12.771436   69688 start.go:364] duration metric: took 32.261µs to acquireMachinesLock for "embed-certs-986409"
	I0127 11:42:12.771453   69688 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:42:12.771461   69688 fix.go:54] fixHost starting: 
	I0127 11:42:12.771931   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:42:12.771978   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:42:12.787654   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 11:42:12.788085   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:42:12.788572   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:42:12.788605   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:42:12.788931   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:42:12.789126   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:12.789248   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:42:12.790996   69688 fix.go:112] recreateIfNeeded on embed-certs-986409: state=Stopped err=<nil>
	I0127 11:42:12.791027   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	W0127 11:42:12.791156   69688 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:42:12.792919   69688 out.go:177] * Restarting existing kvm2 VM for "embed-certs-986409" ...
	I0127 11:42:12.794052   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Start
	I0127 11:42:12.794230   69688 main.go:141] libmachine: (embed-certs-986409) starting domain...
	I0127 11:42:12.794251   69688 main.go:141] libmachine: (embed-certs-986409) ensuring networks are active...
	I0127 11:42:12.795086   69688 main.go:141] libmachine: (embed-certs-986409) Ensuring network default is active
	I0127 11:42:12.795557   69688 main.go:141] libmachine: (embed-certs-986409) Ensuring network mk-embed-certs-986409 is active
	I0127 11:42:12.795977   69688 main.go:141] libmachine: (embed-certs-986409) getting domain XML...
	I0127 11:42:12.796825   69688 main.go:141] libmachine: (embed-certs-986409) creating domain...
	I0127 11:42:14.065097   69688 main.go:141] libmachine: (embed-certs-986409) waiting for IP...
	I0127 11:42:14.065877   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:14.066292   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:14.066416   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:14.066281   69723 retry.go:31] will retry after 251.556361ms: waiting for domain to come up
	I0127 11:42:14.320081   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:14.320695   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:14.320727   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:14.320674   69723 retry.go:31] will retry after 293.940821ms: waiting for domain to come up
	I0127 11:42:14.616361   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:14.616956   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:14.616998   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:14.616913   69723 retry.go:31] will retry after 416.598975ms: waiting for domain to come up
	I0127 11:42:15.035558   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:15.036146   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:15.036173   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:15.036115   69723 retry.go:31] will retry after 529.005635ms: waiting for domain to come up
	I0127 11:42:15.567119   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:15.567642   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:15.567671   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:15.567594   69723 retry.go:31] will retry after 561.691779ms: waiting for domain to come up
	I0127 11:42:16.131298   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:16.131761   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:16.131790   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:16.131732   69723 retry.go:31] will retry after 656.788473ms: waiting for domain to come up
	I0127 11:42:16.790719   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:16.791268   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:16.791297   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:16.791234   69723 retry.go:31] will retry after 1.141459805s: waiting for domain to come up
	I0127 11:42:17.934019   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:17.934669   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:17.934701   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:17.934627   69723 retry.go:31] will retry after 975.314109ms: waiting for domain to come up
	I0127 11:42:18.911975   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:18.912500   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:18.912533   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:18.912480   69723 retry.go:31] will retry after 1.802470158s: waiting for domain to come up
	I0127 11:42:20.717635   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:20.718141   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:20.718168   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:20.718109   69723 retry.go:31] will retry after 1.493028158s: waiting for domain to come up
	I0127 11:42:22.212730   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:22.213235   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:22.213285   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:22.213179   69723 retry.go:31] will retry after 2.750443183s: waiting for domain to come up
	I0127 11:42:24.964692   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:24.965165   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:24.965195   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:24.965127   69723 retry.go:31] will retry after 3.619213666s: waiting for domain to come up
	I0127 11:42:28.586304   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:28.586814   69688 main.go:141] libmachine: (embed-certs-986409) DBG | unable to find current IP address of domain embed-certs-986409 in network mk-embed-certs-986409
	I0127 11:42:28.586856   69688 main.go:141] libmachine: (embed-certs-986409) DBG | I0127 11:42:28.586769   69723 retry.go:31] will retry after 3.138682944s: waiting for domain to come up
	I0127 11:42:31.728667   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.729187   69688 main.go:141] libmachine: (embed-certs-986409) found domain IP: 192.168.72.29
	I0127 11:42:31.729206   69688 main.go:141] libmachine: (embed-certs-986409) reserving static IP address...
	I0127 11:42:31.729227   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has current primary IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.729754   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "embed-certs-986409", mac: "52:54:00:59:d5:0d", ip: "192.168.72.29"} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:31.729785   69688 main.go:141] libmachine: (embed-certs-986409) DBG | skip adding static IP to network mk-embed-certs-986409 - found existing host DHCP lease matching {name: "embed-certs-986409", mac: "52:54:00:59:d5:0d", ip: "192.168.72.29"}
	I0127 11:42:31.729797   69688 main.go:141] libmachine: (embed-certs-986409) reserved static IP address 192.168.72.29 for domain embed-certs-986409
	I0127 11:42:31.729814   69688 main.go:141] libmachine: (embed-certs-986409) waiting for SSH...
	I0127 11:42:31.729825   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Getting to WaitForSSH function...
	I0127 11:42:31.732344   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.732739   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:31.732769   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.732924   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Using SSH client type: external
	I0127 11:42:31.732965   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa (-rw-------)
	I0127 11:42:31.732996   69688 main.go:141] libmachine: (embed-certs-986409) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:42:31.733010   69688 main.go:141] libmachine: (embed-certs-986409) DBG | About to run SSH command:
	I0127 11:42:31.733026   69688 main.go:141] libmachine: (embed-certs-986409) DBG | exit 0
	I0127 11:42:31.871287   69688 main.go:141] libmachine: (embed-certs-986409) DBG | SSH cmd err, output: <nil>: 
	I0127 11:42:31.871636   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetConfigRaw
	I0127 11:42:31.872329   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetIP
	I0127 11:42:31.876570   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.876971   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:31.877004   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.877260   69688 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/config.json ...
	I0127 11:42:31.877504   69688 machine.go:93] provisionDockerMachine start ...
	I0127 11:42:31.877525   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:31.877712   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:31.879777   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.880039   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:31.880067   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:31.880253   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:31.880448   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:31.880597   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:31.880728   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:31.880895   69688 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:31.881136   69688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0127 11:42:31.881152   69688 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:42:32.000043   69688 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:42:32.000081   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetMachineName
	I0127 11:42:32.000305   69688 buildroot.go:166] provisioning hostname "embed-certs-986409"
	I0127 11:42:32.000333   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetMachineName
	I0127 11:42:32.000508   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.004380   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.004831   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.004859   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.004969   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:32.005160   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.005332   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.005535   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:32.005705   69688 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:32.005892   69688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0127 11:42:32.005908   69688 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-986409 && echo "embed-certs-986409" | sudo tee /etc/hostname
	I0127 11:42:32.145349   69688 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-986409
	
	I0127 11:42:32.145377   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.148279   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.148690   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.148722   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.148850   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:32.149035   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.149194   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.149361   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:32.149498   69688 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:32.149659   69688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0127 11:42:32.149674   69688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-986409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-986409/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-986409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:42:32.277051   69688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
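The shell snippet above is the provisioner's idempotent /etc/hosts fix-up: leave the file alone when some line already ends in the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. A minimal pure-Go sketch of the same decision logic (ensureHostname is an illustrative helper, not minikube's API):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell above: no-op when a line already ends
    // in the hostname, otherwise rewrite the 127.0.1.1 entry or append one.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already mapped, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "embed-certs-986409"))
    }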
	I0127 11:42:32.277081   69688 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:42:32.277099   69688 buildroot.go:174] setting up certificates
	I0127 11:42:32.277107   69688 provision.go:84] configureAuth start
	I0127 11:42:32.277120   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetMachineName
	I0127 11:42:32.277382   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetIP
	I0127 11:42:32.280634   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.281048   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.281085   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.281270   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.283595   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.283979   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.284030   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.284157   69688 provision.go:143] copyHostCerts
	I0127 11:42:32.284217   69688 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:42:32.284238   69688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:42:32.284313   69688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:42:32.284420   69688 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:42:32.284432   69688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:42:32.284462   69688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:42:32.284537   69688 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:42:32.284547   69688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:42:32.284578   69688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:42:32.284643   69688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.embed-certs-986409 san=[127.0.0.1 192.168.72.29 embed-certs-986409 localhost minikube]
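provision.go:117 above generates the machine's server certificate with the SAN list [127.0.0.1 192.168.72.29 embed-certs-986409 localhost minikube]. A self-contained sketch of how such a SAN list maps onto Go's x509 fields; unlike the real step, which signs with the shared minikube CA (ca.pem/ca-key.pem), this one self-signs purely for illustration:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key for the server certificate; server-key.pem plays this role in the log.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        // The SAN list from provision.go:117, split into DNS names and IPs.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-986409"}},
            DNSNames:     []string{"embed-certs-986409", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.29")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here (parent == template); the real flow signs with the CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }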
	I0127 11:42:32.478732   69688 provision.go:177] copyRemoteCerts
	I0127 11:42:32.478781   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:42:32.478803   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.481487   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.481845   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.481883   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.482044   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:32.482239   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.482389   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:32.482525   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:42:32.575112   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:42:32.612780   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:42:32.643810   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 11:42:32.673696   69688 provision.go:87] duration metric: took 396.571216ms to configureAuth
	I0127 11:42:32.673721   69688 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:42:32.673869   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:42:32.673934   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.676487   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.676837   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.676865   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.677074   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:32.677281   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.677457   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.677639   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:32.677836   69688 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:32.678047   69688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0127 11:42:32.678072   69688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:42:32.909759   69688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:42:32.909791   69688 machine.go:96] duration metric: took 1.032270922s to provisionDockerMachine
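The restart above applies the drop-in environment file /etc/sysconfig/crio.minikube, which marks the service CIDR 10.96.0.0/12 as an insecure registry so in-cluster registries are reachable without TLS. A sketch of the same two steps done locally (configureCrio is hypothetical, requires root, and the real provisioner runs this through the SSH session shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // configureCrio writes the drop-in env file from the log, then restarts
    // CRI-O so the daemon picks the new options up.
    func configureCrio() error {
        const env = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            return err
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(env), 0o644); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "crio").Run()
    }

    func main() {
        fmt.Println(configureCrio())
    }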
	I0127 11:42:32.909805   69688 start.go:293] postStartSetup for "embed-certs-986409" (driver="kvm2")
	I0127 11:42:32.909820   69688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:42:32.909850   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:32.910156   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:42:32.910219   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:32.913332   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.913731   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:32.913769   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:32.913933   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:32.914111   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:32.914269   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:32.914427   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:42:33.003810   69688 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:42:33.008201   69688 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:42:33.008224   69688 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:42:33.008294   69688 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:42:33.008397   69688 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:42:33.008512   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:42:33.018450   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:42:33.041329   69688 start.go:296] duration metric: took 131.511219ms for postStartSetup
	I0127 11:42:33.041366   69688 fix.go:56] duration metric: took 20.269907014s for fixHost
	I0127 11:42:33.041387   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:33.044157   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.044471   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:33.044501   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.044671   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:33.044842   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:33.045086   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:33.045217   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:33.045393   69688 main.go:141] libmachine: Using SSH client type: native
	I0127 11:42:33.045564   69688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0127 11:42:33.045575   69688 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:42:33.159988   69688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978153.135843628
	
	I0127 11:42:33.160018   69688 fix.go:216] guest clock: 1737978153.135843628
	I0127 11:42:33.160029   69688 fix.go:229] Guest: 2025-01-27 11:42:33.135843628 +0000 UTC Remote: 2025-01-27 11:42:33.041369835 +0000 UTC m=+20.419337794 (delta=94.473793ms)
	I0127 11:42:33.160056   69688 fix.go:200] guest clock delta is within tolerance: 94.473793ms
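fix.go compares the guest's `date +%s.%N` output against the host clock and proceeds only while the skew stays inside a tolerance. A small sketch of that check (the one-second tolerance is an assumption for illustration; only the 94.473793ms delta comes from the log):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports the absolute guest/host skew and whether it is
    // inside the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1737978153, 135843628)      // parsed from the SSH output above
        host := guest.Add(-94473793 * time.Nanosecond) // reproduces the logged 94.473793ms delta
        d, ok := clockDeltaOK(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }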
	I0127 11:42:33.160063   69688 start.go:83] releasing machines lock for "embed-certs-986409", held for 20.388617209s
	I0127 11:42:33.160090   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:33.160335   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetIP
	I0127 11:42:33.163290   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.163716   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:33.163746   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.163899   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:33.164352   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:33.164546   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:42:33.164647   69688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:42:33.164711   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:33.164737   69688 ssh_runner.go:195] Run: cat /version.json
	I0127 11:42:33.164756   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:42:33.167684   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.167708   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.168061   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:33.168091   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.168118   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:33.168139   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:33.168276   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:33.168391   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:42:33.168504   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:33.168604   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:33.168612   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:42:33.168775   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:42:33.168778   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:42:33.168920   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:42:33.276772   69688 ssh_runner.go:195] Run: systemctl --version
	I0127 11:42:33.282269   69688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:42:33.427963   69688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:42:33.434740   69688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:42:33.434800   69688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:42:33.450725   69688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:42:33.450749   69688 start.go:495] detecting cgroup driver to use...
	I0127 11:42:33.450820   69688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:42:33.466482   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:42:33.479943   69688 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:42:33.480007   69688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:42:33.493140   69688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:42:33.506033   69688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:42:33.614666   69688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:42:33.760949   69688 docker.go:233] disabling docker service ...
	I0127 11:42:33.761013   69688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:42:33.774578   69688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:42:33.786925   69688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:42:33.940214   69688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:42:34.066834   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:42:34.079574   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:42:34.096460   69688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:42:34.096515   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.106112   69688 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:42:34.106173   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.115769   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.125166   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.134516   69688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:42:34.144353   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.158006   69688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:42:34.177824   69688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
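Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following fragment (reconstructed from the commands, not copied from the guest):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]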
	I0127 11:42:34.191865   69688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:42:34.202678   69688 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:42:34.202748   69688 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:42:34.221667   69688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
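The status-255 sysctl probe above is expected on a fresh boot: /proc/sys/net/bridge/bridge-nf-call-iptables exists only once br_netfilter is loaded, so the fallback is to modprobe the module. A sketch of that probe-then-load pattern (ensureBrNetfilter is illustrative; the real code runs these commands over SSH inside the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBrNetfilter checks for the bridge netfilter sysctl and loads the
    // kernel module when the /proc entry is missing.
    func ensureBrNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
            return nil // bridge netfilter already visible
        }
        out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput()
        if err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := ensureBrNetfilter(); err != nil {
            fmt.Println(err)
        }
    }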
	I0127 11:42:34.231909   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:34.352007   69688 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:42:34.449223   69688 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:42:34.449293   69688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:42:34.454017   69688 start.go:563] Will wait 60s for crictl version
	I0127 11:42:34.454077   69688 ssh_runner.go:195] Run: which crictl
	I0127 11:42:34.457860   69688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:42:34.512688   69688 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:42:34.512780   69688 ssh_runner.go:195] Run: crio --version
	I0127 11:42:34.544134   69688 ssh_runner.go:195] Run: crio --version
	I0127 11:42:34.577282   69688 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:42:34.578761   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetIP
	I0127 11:42:34.581636   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:34.582012   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:42:34.582047   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:42:34.582271   69688 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 11:42:34.586570   69688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:42:34.600153   69688 kubeadm.go:883] updating cluster {Name:embed-certs-986409 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-986409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:42:34.600277   69688 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:42:34.600323   69688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:42:34.644495   69688 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:42:34.644582   69688 ssh_runner.go:195] Run: which lz4
	I0127 11:42:34.648471   69688 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:42:34.652324   69688 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:42:34.652351   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 11:42:35.880986   69688 crio.go:462] duration metric: took 1.232547554s to copy over tarball
	I0127 11:42:35.881064   69688 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:42:37.982000   69688 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100908995s)
	I0127 11:42:37.982036   69688 crio.go:469] duration metric: took 2.101022003s to extract the tarball
	I0127 11:42:37.982043   69688 ssh_runner.go:146] rm: /preloaded.tar.lz4
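The preload path above avoids pulling every image individually: the ~398 MB lz4 tarball is scp'd to /preloaded.tar.lz4, unpacked into /var with extended attributes preserved so image layers keep file capabilities such as security.capability, and then deleted. A sketch of the extraction step (extractPreload is illustrative; the tar flags are exactly those in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks the preloaded image tarball into /var,
    // preserving security.capability xattrs on the extracted files.
    func extractPreload() error {
        out, err := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(extractPreload())
    }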
	I0127 11:42:38.018136   69688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:42:38.057197   69688 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:42:38.057218   69688 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:42:38.057225   69688 kubeadm.go:934] updating node { 192.168.72.29 8443 v1.32.1 crio true true} ...
	I0127 11:42:38.057329   69688 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-986409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-986409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:42:38.057416   69688 ssh_runner.go:195] Run: crio config
	I0127 11:42:38.104140   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:42:38.104159   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:42:38.104170   69688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:42:38.104195   69688 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-986409 NodeName:embed-certs-986409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:42:38.104336   69688 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-986409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.29"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:42:38.104415   69688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:42:38.113736   69688 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:42:38.113785   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:42:38.122865   69688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 11:42:38.137895   69688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:42:38.152660   69688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0127 11:42:38.167851   69688 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0127 11:42:38.171190   69688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
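Both /etc/hosts one-liners (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idiom: filter out any stale line for the name with grep -v, append the fresh tab-separated mapping, stage the result in /tmp/h.$$, then sudo cp it into place, since the shell redirection itself runs unprivileged. A pure-Go rendering of the filter-and-append step (setHostsEntry is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // setHostsEntry drops any line already ending in "\t<name>", then
    // appends a fresh "IP\tname" mapping, mirroring the grep -v / echo pair.
    func setHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(setHostsEntry("127.0.0.1 localhost\n", "192.168.72.29", "control-plane.minikube.internal"))
    }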
	I0127 11:42:38.182085   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:42:38.297343   69688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:42:38.313886   69688 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409 for IP: 192.168.72.29
	I0127 11:42:38.313911   69688 certs.go:194] generating shared ca certs ...
	I0127 11:42:38.313931   69688 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:38.314081   69688 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:42:38.314129   69688 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:42:38.314148   69688 certs.go:256] generating profile certs ...
	I0127 11:42:38.314257   69688 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/client.key
	I0127 11:42:38.314350   69688 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/apiserver.key.fe0d25b6
	I0127 11:42:38.314391   69688 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/proxy-client.key
	I0127 11:42:38.314539   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:42:38.314587   69688 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:42:38.314602   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:42:38.314634   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:42:38.314668   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:42:38.314704   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:42:38.314765   69688 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:42:38.315554   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:42:38.349750   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:42:38.380641   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:42:38.405679   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:42:38.433824   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 11:42:38.463684   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:42:38.487131   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:42:38.512103   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/embed-certs-986409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:42:38.536651   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:42:38.558134   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:42:38.578919   69688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:42:38.599870   69688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:42:38.615254   69688 ssh_runner.go:195] Run: openssl version
	I0127 11:42:38.620494   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:42:38.630231   69688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:42:38.634162   69688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:42:38.634214   69688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:42:38.639329   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:42:38.649442   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:42:38.659322   69688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:42:38.663380   69688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:42:38.663417   69688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:42:38.668528   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:42:38.678277   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:42:38.688179   69688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:38.692057   69688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:38.692100   69688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:42:38.697239   69688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
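Each `openssl x509 -hash -noout` call above prints the certificate's 8-hex-digit subject hash, and the following ln -fs exposes the PEM as /etc/ssl/certs/<hash>.0, the layout OpenSSL walks when it looks up CA certificates at verification time. A sketch of that pairing (linkByHash is illustrative and must run as root against the real trust store):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a PEM cert and links
    // it into the trust store as <hash>.0, e.g. b5213941.0 in the log.
    func linkByHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // refresh an existing link, mirroring ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }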
	I0127 11:42:38.707027   69688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:42:38.711039   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:42:38.716446   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:42:38.721789   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:42:38.727044   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:42:38.732225   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:42:38.737696   69688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
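The `-checkend 86400` probes above are the standard openssl test for "expires within the next 24 hours", run here against each control-plane certificate before the cluster is restarted. The crypto/x509 equivalent (expiresWithin is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path has a NotAfter
    // inside the given window, matching `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }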
	I0127 11:42:38.743229   69688 kubeadm.go:392] StartCluster: {Name:embed-certs-986409 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-986409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:42:38.743328   69688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:42:38.743364   69688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:38.778011   69688 cri.go:89] found id: ""
	I0127 11:42:38.778064   69688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:42:38.787511   69688 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:42:38.787532   69688 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:42:38.787570   69688 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:42:38.796505   69688 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:42:38.797148   69688 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-986409" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:42:38.797433   69688 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-986409" cluster setting kubeconfig missing "embed-certs-986409" context setting]
	I0127 11:42:38.798044   69688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:42:38.799296   69688 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:42:38.808072   69688 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.29
	I0127 11:42:38.808103   69688 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:42:38.808114   69688 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:42:38.808163   69688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:42:38.846053   69688 cri.go:89] found id: ""
	I0127 11:42:38.846153   69688 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:42:38.862555   69688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:42:38.872104   69688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:42:38.872124   69688 kubeadm.go:157] found existing configuration files:
	
	I0127 11:42:38.872160   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:42:38.880456   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:42:38.880527   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:42:38.889192   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:42:38.899010   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:42:38.899056   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:42:38.908954   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:42:38.918417   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:42:38.918455   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:42:38.928690   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:42:38.938390   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:42:38.938432   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:42:38.948734   69688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:42:38.958993   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:39.066183   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:39.813678   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:40.020616   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:40.076839   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:40.133049   69688 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:42:40.133132   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:40.633965   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:41.133282   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:41.633827   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:42.133182   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:42.634049   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:42:42.649098   69688 api_server.go:72] duration metric: took 2.516045133s to wait for apiserver process to appear ...
	I0127 11:42:42.649126   69688 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:42:42.649149   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:42:45.149273   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:45.149298   69688 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:45.149310   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:42:45.171694   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:42:45.171720   69688 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:42:45.649282   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:42:45.654110   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:45.654136   69688 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:42:46.149796   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:42:46.155226   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:42:46.155251   69688 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:42:46.649933   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:42:46.655497   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0127 11:42:46.663033   69688 api_server.go:141] control plane version: v1.32.1
	I0127 11:42:46.663064   69688 api_server.go:131] duration metric: took 4.013929102s to wait for apiserver health ...
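
The healthz sequence above (403 while anonymous access is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, then 200) is a plain poll-until-healthy loop. A minimal Go sketch of that pattern, not minikube's api_server.go itself; the address is taken from the log, and TLS verification is skipped on the assumption of a cluster-local CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline passes. 403 and 500 responses are treated as
	// "not ready yet", matching the progression in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// A probe like this typically skips verification because the
			// apiserver serves a certificate signed by a cluster-local CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.29:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}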
	I0127 11:42:46.663074   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:42:46.663085   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:42:46.664890   69688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:42:46.666613   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:42:46.705672   69688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
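
The log records only that 496 bytes were copied to /etc/cni/net.d/1-k8s.conflist, not the file's contents. As a rough sketch of what a bridge CNI conflist generally looks like, the Go program below emits one; every field value is an illustrative assumption based on the standard "bridge" and "portmap" plugins, not the exact file minikube generates:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed shape of a bridge conflist; values are placeholders.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}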
	I0127 11:42:46.735564   69688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:42:46.748269   69688 system_pods.go:59] 8 kube-system pods found
	I0127 11:42:46.748297   69688 system_pods.go:61] "coredns-668d6bf9bc-mw592" [646bb109-71be-491d-b109-aab082466c6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:42:46.748311   69688 system_pods.go:61] "etcd-embed-certs-986409" [5e1dc5f0-ec5c-480d-a6ae-00ec599edd1c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:42:46.748319   69688 system_pods.go:61] "kube-apiserver-embed-certs-986409" [5367f5ba-10d6-4874-b15a-9ceb76f67438] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:42:46.748327   69688 system_pods.go:61] "kube-controller-manager-embed-certs-986409" [74e7b7ea-6f2c-4346-a023-b11a8dac3fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:42:46.748336   69688 system_pods.go:61] "kube-proxy-8p7zc" [1188ee47-52da-472d-be87-8a78374f5fea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 11:42:46.748345   69688 system_pods.go:61] "kube-scheduler-embed-certs-986409" [aa542a9b-a473-4dda-88f6-686d847ec459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:42:46.748360   69688 system_pods.go:61] "metrics-server-f79f97bbb-8rmt5" [b02fb04f-47dc-4104-82a9-f3a68851f1e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:42:46.748376   69688 system_pods.go:61] "storage-provisioner" [389174b7-4c12-4db4-ade3-b6e14c3b60f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:42:46.748383   69688 system_pods.go:74] duration metric: took 12.79183ms to wait for pod list to return data ...
	I0127 11:42:46.748392   69688 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:42:46.753906   69688 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:42:46.753933   69688 node_conditions.go:123] node cpu capacity is 2
	I0127 11:42:46.753944   69688 node_conditions.go:105] duration metric: took 5.547773ms to run NodePressure ...
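
The NodePressure verification above reads capacity and condition data from the node objects. A short client-go sketch of an equivalent check, assuming a reachable kubeconfig path; the names and path here are illustrative, not minikube's node_conditions.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			// Capacity figures like the 17734596Ki / 2-CPU values in the log.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name,
				node.Status.Capacity.Cpu(), node.Status.Capacity.StorageEphemeral())
			// Pressure conditions should be False on a healthy node.
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
					fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
				}
			}
		}
	}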
	I0127 11:42:46.753965   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:42:47.032615   69688 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:42:47.036679   69688 kubeadm.go:739] kubelet initialised
	I0127 11:42:47.036703   69688 kubeadm.go:740] duration metric: took 4.063128ms waiting for restarted kubelet to initialise ...
	I0127 11:42:47.036713   69688 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:42:47.042710   69688 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-mw592" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:49.050252   69688 pod_ready.go:103] pod "coredns-668d6bf9bc-mw592" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:51.549372   69688 pod_ready.go:93] pod "coredns-668d6bf9bc-mw592" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:51.549400   69688 pod_ready.go:82] duration metric: took 4.506666642s for pod "coredns-668d6bf9bc-mw592" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:51.549412   69688 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:53.555660   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:55.555766   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:42:56.055507   69688 pod_ready.go:93] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:56.055529   69688 pod_ready.go:82] duration metric: took 4.506109559s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.055538   69688 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.060271   69688 pod_ready.go:93] pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:56.060294   69688 pod_ready.go:82] duration metric: took 4.750501ms for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.060312   69688 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.064163   69688 pod_ready.go:93] pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:56.064179   69688 pod_ready.go:82] duration metric: took 3.859862ms for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.064187   69688 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8p7zc" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.068010   69688 pod_ready.go:93] pod "kube-proxy-8p7zc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:56.068025   69688 pod_ready.go:82] duration metric: took 3.833464ms for pod "kube-proxy-8p7zc" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.068032   69688 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.573557   69688 pod_ready.go:93] pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:42:56.573579   69688 pod_ready.go:82] duration metric: took 505.540848ms for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:56.573588   69688 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" ...
	I0127 11:42:58.579700   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:01.078778   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:03.080097   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:05.578614   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:07.580339   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:10.079497   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:12.079920   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:14.580165   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:17.079333   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:19.080674   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:21.580078   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:24.080803   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:26.580201   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:29.079646   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:31.079780   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:33.080981   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:35.579663   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:37.580869   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:40.079450   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:43.059691   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:45.080018   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:47.579959   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:49.580402   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:52.078961   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:54.082225   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:56.579218   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:58.580358   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:01.079844   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:03.579752   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:06.079123   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:08.080620   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:10.579161   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:12.580509   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.581269   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.080972   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.581213   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.079928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.080232   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.080547   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.081045   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.579496   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.580035   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.078928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.079470   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.081149   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:41.579699   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.911112   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.080526   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.580901   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.079391   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.081988   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:55.580478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.079513   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.079905   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:02.080706   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:04.579620   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.079240   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.079806   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.081218   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:13.579850   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.580199   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.080468   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:20.579930   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.580421   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:25.080118   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:27.579811   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:30.079284   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.079751   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:34.580737   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.080749   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.579328   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.580544   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.079255   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:46.079779   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.080162   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.579311   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.580182   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.580408   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.079629   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.080820   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.580837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.581478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.079382   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.079678   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.079837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:12.580869   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.080714   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:17.580030   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:19.580896   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.079867   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:24.580025   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.079771   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.580416   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.080512   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.579348   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.579626   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:39.080089   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.080522   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.080947   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.579654   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:48.080040   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.580505   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.580590   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:55.079977   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.573914   69688 pod_ready.go:82] duration metric: took 4m0.000313005s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:56.573939   69688 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:56.573958   69688 pod_ready.go:39] duration metric: took 4m9.537234596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
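
The four-minute wait that just expired for metrics-server is the classic poll-for-PodReady loop. A condensed sketch of that pattern using client-go and apimachinery's wait helper; this is illustrative, not minikube's pod_ready.go, and the kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady blocks until the named pod reports the Ready condition,
	// or the 4m0s budget seen in the log expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as "not ready yet"
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "metrics-server-f79f97bbb-8rmt5"); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}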
	I0127 11:46:56.573984   69688 kubeadm.go:597] duration metric: took 4m17.786447343s to restartPrimaryControlPlane
	W0127 11:46:56.574055   69688 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:56.574078   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:47:24.171505   69688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.597391159s)
	I0127 11:47:24.171597   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:24.187337   69688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:24.197062   69688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:24.208102   69688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:24.208127   69688 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:24.208176   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:24.223247   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:24.223306   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:24.232903   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:24.241163   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:24.241220   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:24.251669   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.260475   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:24.260534   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.269272   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:24.277509   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:24.277554   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:47:24.286253   69688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:24.435312   69688 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:47:32.285356   69688 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:32.285447   69688 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:32.285583   69688 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:32.285722   69688 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:32.285858   69688 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:32.285955   69688 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:32.287165   69688 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:32.287240   69688 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:32.287301   69688 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:32.287411   69688 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:32.287505   69688 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:32.287574   69688 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:32.287659   69688 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:32.287773   69688 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:32.287869   69688 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:32.287947   69688 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:32.288020   69688 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:32.288054   69688 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:32.288102   69688 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:32.288149   69688 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:32.288202   69688 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:32.288265   69688 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:32.288341   69688 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:32.288412   69688 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:32.288506   69688 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:32.288612   69688 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:32.290658   69688 out.go:235]   - Booting up control plane ...
	I0127 11:47:32.290754   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:32.290861   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:32.290938   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:32.291060   69688 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:32.291188   69688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:32.291240   69688 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:32.291426   69688 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:32.291585   69688 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:32.291703   69688 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.921879ms
	I0127 11:47:32.291805   69688 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:32.291896   69688 kubeadm.go:310] [api-check] The API server is healthy after 5.007975802s
	I0127 11:47:32.292039   69688 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:32.292235   69688 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:32.292322   69688 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:32.292582   69688 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-986409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:32.292672   69688 kubeadm.go:310] [bootstrap-token] Using token: qkdn31.mmb2k0rafw3oyd5r
	I0127 11:47:32.293870   69688 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:32.294001   69688 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:32.294069   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:32.294179   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:32.294287   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:32.294412   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:32.294512   69688 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:32.294620   69688 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:32.294658   69688 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:32.294697   69688 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:32.294704   69688 kubeadm.go:310] 
	I0127 11:47:32.294752   69688 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:32.294759   69688 kubeadm.go:310] 
	I0127 11:47:32.294824   69688 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:32.294834   69688 kubeadm.go:310] 
	I0127 11:47:32.294869   69688 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:32.294927   69688 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:32.294970   69688 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:32.294976   69688 kubeadm.go:310] 
	I0127 11:47:32.295034   69688 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:32.295040   69688 kubeadm.go:310] 
	I0127 11:47:32.295078   69688 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:32.295084   69688 kubeadm.go:310] 
	I0127 11:47:32.295129   69688 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:32.295218   69688 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:32.295321   69688 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:32.295333   69688 kubeadm.go:310] 
	I0127 11:47:32.295447   69688 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:32.295574   69688 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:32.295586   69688 kubeadm.go:310] 
	I0127 11:47:32.295723   69688 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.295861   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:32.295885   69688 kubeadm.go:310] 	--control-plane 
	I0127 11:47:32.295888   69688 kubeadm.go:310] 
	I0127 11:47:32.295957   69688 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:32.295963   69688 kubeadm.go:310] 
	I0127 11:47:32.296089   69688 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.296217   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
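
The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). A small Go sketch that reproduces such a hash from a CA certificate; the filename under the certificate directory reported earlier in the log is an assumption:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed CA location under the certificateDir from the log.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}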
	I0127 11:47:32.296242   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:47:32.296252   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:32.297821   69688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:32.299024   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:32.311774   69688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:32.333154   69688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:32.333250   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:32.333317   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-986409 minikube.k8s.io/updated_at=2025_01_27T11_47_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=embed-certs-986409 minikube.k8s.io/primary=true
	I0127 11:47:32.373901   69688 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:32.614706   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:33.115242   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:33.614855   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.114947   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.615735   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.114787   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.615277   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.708075   69688 kubeadm.go:1113] duration metric: took 3.374895681s to wait for elevateKubeSystemPrivileges
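
The minikube-rbac binding created above grants cluster-admin to the kube-system default service account, and the repeated "get sa default" runs simply wait for that account to exist. The same binding expressed with client-go, as an illustrative sketch rather than minikube's own code; the kubeconfig path is the one the log shows on the VM:

	package main

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Equivalent of:
		//   kubectl create clusterrolebinding minikube-rbac \
		//     --clusterrole=cluster-admin --serviceaccount=kube-system:default
		binding := &rbacv1.ClusterRoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
			Subjects: []rbacv1.Subject{{
				Kind:      rbacv1.ServiceAccountKind,
				Name:      "default",
				Namespace: "kube-system",
			}},
			RoleRef: rbacv1.RoleRef{
				APIGroup: rbacv1.GroupName,
				Kind:     "ClusterRole",
				Name:     "cluster-admin",
			},
		}
		if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}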
	I0127 11:47:35.708110   69688 kubeadm.go:394] duration metric: took 4m56.964886498s to StartCluster
	I0127 11:47:35.708127   69688 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.708206   69688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:35.709765   69688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.710017   69688 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:35.710099   69688 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:35.710197   69688 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-986409"
	I0127 11:47:35.710208   69688 addons.go:69] Setting default-storageclass=true in profile "embed-certs-986409"
	I0127 11:47:35.710224   69688 addons.go:69] Setting dashboard=true in profile "embed-certs-986409"
	I0127 11:47:35.710231   69688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-986409"
	I0127 11:47:35.710234   69688 addons.go:238] Setting addon dashboard=true in "embed-certs-986409"
	I0127 11:47:35.710215   69688 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-986409"
	W0127 11:47:35.710294   69688 addons.go:247] addon storage-provisioner should already be in state true
	W0127 11:47:35.710246   69688 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:35.710361   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.710231   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:35.710232   69688 addons.go:69] Setting metrics-server=true in profile "embed-certs-986409"
	I0127 11:47:35.710835   69688 addons.go:238] Setting addon metrics-server=true in "embed-certs-986409"
	W0127 11:47:35.710848   69688 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:35.710878   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.711284   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711319   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711356   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711379   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711948   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.712418   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.712548   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.713403   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.713472   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.719688   69688 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:35.721496   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:35.730986   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0127 11:47:35.731485   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.731589   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0127 11:47:35.731973   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.731990   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732030   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732378   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.732610   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I0127 11:47:35.732868   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.732886   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732943   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732985   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733025   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733170   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.733387   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.733408   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.733574   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733609   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733744   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.734292   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.734315   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.739242   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0127 11:47:35.739695   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.740240   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.740254   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.740603   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.740797   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.744403   69688 addons.go:238] Setting addon default-storageclass=true in "embed-certs-986409"
	W0127 11:47:35.744426   69688 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:35.744451   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.744823   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.744854   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.756768   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0127 11:47:35.757189   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.757717   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.757742   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.758231   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.758430   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.760526   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.762154   69688 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:35.763484   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:35.763499   69688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:35.763517   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.766471   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.766836   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.766859   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.767027   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.767162   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.767269   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.767362   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
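
The ssh client above is constructed from the machine's IP, port 22, username, and a per-machine private key. A minimal golang.org/x/crypto/ssh sketch of the same connection setup, with the host-key check relaxed as is common for throwaway local VMs; address, user, and key path come from the log line, while the command run is just a placeholder:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Acceptable for an ephemeral local VM; never do this for real hosts.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.72.29:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("uname -a") // placeholder command
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}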
	I0127 11:47:35.768736   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0127 11:47:35.769217   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.769830   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.769845   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.770259   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.770842   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.770876   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.773590   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0127 11:47:35.774146   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.774722   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.774738   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.774800   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0127 11:47:35.775433   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.775595   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.775820   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.776093   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.776103   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.776797   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.777045   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.777670   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.778791   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.779433   69688 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:35.780791   69688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:35.782335   69688 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:35.782468   69688 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:35.782484   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:35.782515   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.783769   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:35.783786   69688 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:35.783877   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.786270   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786826   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.786854   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786891   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787046   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787077   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787232   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.787378   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.787671   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.787689   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787707   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787860   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787992   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.788077   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.793305   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0127 11:47:35.793811   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.794453   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.794473   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.794772   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.795062   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.796950   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.797253   69688 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:35.797272   69688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:35.797291   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.800329   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800750   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.800775   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800948   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.801144   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.801274   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.801417   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.954346   69688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:35.990894   69688 node_ready.go:35] waiting up to 6m0s for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021695   69688 node_ready.go:49] node "embed-certs-986409" has status "Ready":"True"
	I0127 11:47:36.021724   69688 node_ready.go:38] duration metric: took 30.797887ms for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021737   69688 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:36.029373   69688 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.075684   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:36.075765   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:36.118613   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:36.128091   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:36.128117   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:36.143161   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:36.143196   69688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:36.167151   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:36.195969   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:36.196003   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:36.215973   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.216001   69688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:36.279892   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:36.279930   69688 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:36.302557   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.356672   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:36.356705   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:36.403728   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:36.403755   69688 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:36.490122   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:36.490161   69688 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:36.572014   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:36.572085   69688 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:36.666239   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:36.666266   69688 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:36.784627   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:36.784652   69688 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:36.874981   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:37.244603   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077408875s)
	I0127 11:47:37.244729   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244748   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.244744   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.126101345s)
	I0127 11:47:37.244768   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244778   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246690   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246704   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246699   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246729   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246739   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246747   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246781   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246794   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246804   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246812   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.247222   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247287   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247352   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.247364   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.248606   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.248624   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281282   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.281317   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.281631   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.281653   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281654   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.980174   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.677566724s)
	I0127 11:47:37.980228   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980244   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980560   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980582   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980592   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980601   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980880   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.980939   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980966   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980987   69688 addons.go:479] Verifying addon metrics-server=true in "embed-certs-986409"
	I0127 11:47:38.056288   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:38.999682   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.124629898s)
	I0127 11:47:38.999752   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:38.999775   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000135   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000179   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.000185   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000205   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:39.000220   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000492   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000493   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000507   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.002275   69688 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-986409 addons enable metrics-server
	
	I0127 11:47:39.003930   69688 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:39.005168   69688 addons.go:514] duration metric: took 3.295073777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:40.536239   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:41.539907   69688 pod_ready.go:93] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:41.539938   69688 pod_ready.go:82] duration metric: took 5.510539517s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:41.539950   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046422   69688 pod_ready.go:93] pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.046450   69688 pod_ready.go:82] duration metric: took 506.490576ms for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046464   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.056999   69688 pod_ready.go:93] pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.057022   69688 pod_ready.go:82] duration metric: took 10.550413ms for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.057033   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066831   69688 pod_ready.go:93] pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.066859   69688 pod_ready.go:82] duration metric: took 9.817042ms for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066869   69688 pod_ready.go:39] duration metric: took 6.045119057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:42.066885   69688 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:42.066943   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.106914   69688 api_server.go:72] duration metric: took 6.396863225s to wait for apiserver process to appear ...
	I0127 11:47:42.106942   69688 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:42.106967   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:47:42.115128   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0127 11:47:42.116724   69688 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:42.116746   69688 api_server.go:131] duration metric: took 9.796211ms to wait for apiserver health ...
	I0127 11:47:42.116753   69688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:42.123449   69688 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:42.123472   69688 system_pods.go:61] "coredns-668d6bf9bc-9sk5f" [c6114990-b336-472e-8720-1ef5ccd3b001] Running
	I0127 11:47:42.123479   69688 system_pods.go:61] "coredns-668d6bf9bc-jvx66" [7eab12a3-7303-43fc-84fa-034ced59689b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:47:42.123486   69688 system_pods.go:61] "etcd-embed-certs-986409" [ebdc15ff-c173-440b-ae1a-c0bc983c015b] Running
	I0127 11:47:42.123491   69688 system_pods.go:61] "kube-apiserver-embed-certs-986409" [3cbf2980-e1b2-4cff-8d01-ab9ec4806976] Running
	I0127 11:47:42.123496   69688 system_pods.go:61] "kube-controller-manager-embed-certs-986409" [642b9798-c605-4987-9d0d-2481f451d943] Running
	I0127 11:47:42.123503   69688 system_pods.go:61] "kube-proxy-b82rc" [08412bee-7381-4d81-bb67-fb39fefc29bb] Running
	I0127 11:47:42.123508   69688 system_pods.go:61] "kube-scheduler-embed-certs-986409" [7774826a-ca31-4662-94db-76f6ccbf07c3] Running
	I0127 11:47:42.123516   69688 system_pods.go:61] "metrics-server-f79f97bbb-pjkmz" [4828c28f-5ef4-48ea-9360-151007c2d9be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:42.123522   69688 system_pods.go:61] "storage-provisioner" [df18a80b-cc75-49f1-bd1a-48bab4776d25] Running
	I0127 11:47:42.123530   69688 system_pods.go:74] duration metric: took 6.771018ms to wait for pod list to return data ...
	I0127 11:47:42.123541   69688 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:42.127202   69688 default_sa.go:45] found service account: "default"
	I0127 11:47:42.127219   69688 default_sa.go:55] duration metric: took 3.6724ms for default service account to be created ...
	I0127 11:47:42.127227   69688 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:42.139808   69688 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-986409 -n embed-certs-986409
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-986409 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-986409 logs -n 25: (1.322664341s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo journalctl                       | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo docker                           | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo                                  | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo containerd                       | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo find                             | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo crio                             | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-673007                                       | auto-673007   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	| start   | -p calico-673007 --memory=3072                       | calico-673007 | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:09:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:09:35.995394   79377 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:09:35.995968   79377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:35.995988   79377 out.go:358] Setting ErrFile to fd 2...
	I0127 12:09:35.995996   79377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:35.996432   79377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 12:09:35.997306   79377 out.go:352] Setting JSON to false
	I0127 12:09:35.998655   79377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10276,"bootTime":1737969500,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:09:35.998806   79377 start.go:139] virtualization: kvm guest
	I0127 12:09:36.001356   79377 out.go:177] * [calico-673007] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:09:36.002955   79377 notify.go:220] Checking for updates...
	I0127 12:09:36.003039   79377 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 12:09:36.004797   79377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:09:36.006545   79377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 12:09:36.008112   79377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 12:09:36.009554   79377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:09:36.010982   79377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:09:36.013068   79377 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:36.013161   79377 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:36.013237   79377 config.go:182] Loaded profile config "kindnet-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:36.013328   79377 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:09:36.052876   79377 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:09:36.054516   79377 start.go:297] selected driver: kvm2
	I0127 12:09:36.054533   79377 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:09:36.054544   79377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:09:36.055322   79377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:36.055399   79377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:09:36.072057   79377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:09:36.072119   79377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:09:36.072400   79377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:09:36.072436   79377 cni.go:84] Creating CNI manager for "calico"
	I0127 12:09:36.072444   79377 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0127 12:09:36.072504   79377 start.go:340] cluster config:
	{Name:calico-673007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-673007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:36.072618   79377 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:36.074700   79377 out.go:177] * Starting "calico-673007" primary control-plane node in "calico-673007" cluster
	
	
	==> CRI-O <==
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.804492118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979776804473536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7ecf1d4-8a99-45bc-8d35-4dae572e4620 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.804969180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9095d280-08ca-4507-86b2-1bca45cec420 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.805019404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9095d280-08ca-4507-86b2-1bca45cec420 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.805230480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d,PodSandboxId:b2f001e415b9d3e84a028a5cf293d586edd838f606fa85444c1d8b22e02adc57,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979726665717722,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-5n7kn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f303c5bf-fd09-4021-a854-50341e582743,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfaa3c7381575d59b62c74a421309679fd627b26e7151fb34c378357f18f80f5,PodSandboxId:26a03d527501b0d8b0864ee1063400fdf30fb9bb7815479237676b3f840ac859,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978465454412345,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-lb668,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 39ec21b8-52e4-48eb-a2e8-4670860564c4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dfd5591993c3cab9a9b792fcc22beeb8117b03c207bf3b882c99ec5335baf94,PodSandboxId:3715da4596d5f07d0268cb30e5de77f1ccdb5ff39e8b50c9e5291d6b1fac1afd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978458269295896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18a80b-cc75-49f1-bd1a-48bab4776d25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e7b314dc13ad12a41f35a9ffde3d5eb541b67822269a47d226f703b9a1189b,PodSandboxId:3273437095fd89f93654e5c1fb3790d17a1a8c07b90cc58f8bc45dc6ffbd52bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978457865762338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9sk5f,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: c6114990-b336-472e-8720-1ef5ccd3b001,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b9888817fb5630d001e698f6af51a434b5092fa4e70e4f1bc12f4502e3080d,PodSandboxId:59a3008d97e928b6cc8185c42b0d9f270db8de3e036f1e8170b3ac0aad3700f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978457813493462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jvx66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eab12a3-7303-43fc-84fa-034ced59689b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d5ff23308a221d0bfc125715b10af070d0c9a8ef60c0bbbb62bd42f0dfd7ec,PodSandboxId:6bd0c496526eee70b468a540d366485623ccbdcf3eb7609a714ac20cbc5d5025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978456893124913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b82rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08412bee-7381-4d81-bb67-fb39fefc29bb,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db043049216b8e9711ad9597bf3f558df76da2f6b63450dc7d72191286a72ea1,PodSandboxId:88b6b66dc3d0a9689fe2a25e4558b4e979d37dcd2d09d98e33ef8b4115c9543d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978446351922061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc637e4dbac1b60946ef8b8b41f9466e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de8e949243b129684444f580974b2708ba640f1a9a8b275a48e8ae402c9de646,PodSandboxId:c6d3150535e70dbcdadcc2d08a49bb2bc06b51623a4ae66a6aab2e0826a2726e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca153
5f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978446326150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f6f0cb0ca439e1dc31fe817fd3f42d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b3bfe261b77f0ba4520f43b80e5145670721542813f3e7c55d5319750a34dd,PodSandboxId:df6752227df7d91d3352feba918bfcd55d364157ee9c3256844fa6640ed383da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978446242328079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87351d74dc5af8c3d91d7939c5b791fd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:562a81fb3a97c2c5334e71a41e5cfb033187d6cb3dd12ee9b7497524d4759fc5,PodSandboxId:8f44d67f36eef50cd0be5ae09fea0a63a26ba470a4f697d36ebb33bcd5359ea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978446200748087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2abf9c8c40e7c4507beac5cc3efdab,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942eb3eb328a5ac5904537a58b053796083e4066e416693c7233b8b2dad63aef,PodSandboxId:081e45ede512c52eb3b8ebe91cee0f1a6c2bd0e9cd4b254764a28959a22f89e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978162017222713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc637e4dbac1b60946ef8b8b41f9466e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9095d280-08ca-4507-86b2-1bca45cec420 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.839832375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4beabf54-ad8f-4457-a5ef-fd8c428b042a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.839901132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4beabf54-ad8f-4457-a5ef-fd8c428b042a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.841166297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6006e9d7-b0cd-4946-815b-baf76489c75a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.841586821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979776841566950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6006e9d7-b0cd-4946-815b-baf76489c75a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.842174737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65af4cb6-595d-4d40-bd54-4daca1c295fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.842225877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65af4cb6-595d-4d40-bd54-4daca1c295fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:36 embed-certs-986409 crio[724]: time="2025-01-27 12:09:36.842473134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d,PodSandboxId:b2f001e415b9d3e84a028a5cf293d586edd838f606fa85444c1d8b22e02adc57,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979726665717722,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-5n7kn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f303c5bf-fd09-4021-a854-50341e582743,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfaa3c7381575d59b62c74a421309679fd627b26e7151fb34c378357f18f80f5,PodSandboxId:26a03d527501b0d8b0864ee1063400fdf30fb9bb7815479237676b3f840ac859,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978465454412345,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-lb668,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 39ec21b8-52e4-48eb-a2e8-4670860564c4,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dfd5591993c3cab9a9b792fcc22beeb8117b03c207bf3b882c99ec5335baf94,PodSandboxId:3715da4596d5f07d0268cb30e5de77f1ccdb5ff39e8b50c9e5291d6b1fac1afd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978458269295896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18a80b-cc75-49f1-bd1a-48bab4776d25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e7b314dc13ad12a41f35a9ffde3d5eb541b67822269a47d226f703b9a1189b,PodSandboxId:3273437095fd89f93654e5c1fb3790d17a1a8c07b90cc58f8bc45dc6ffbd52bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978457865762338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9sk5f,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: c6114990-b336-472e-8720-1ef5ccd3b001,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b9888817fb5630d001e698f6af51a434b5092fa4e70e4f1bc12f4502e3080d,PodSandboxId:59a3008d97e928b6cc8185c42b0d9f270db8de3e036f1e8170b3ac0aad3700f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978457813493462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jvx66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eab12a3-7303-43fc-84fa-034ced59689b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d5ff23308a221d0bfc125715b10af070d0c9a8ef60c0bbbb62bd42f0dfd7ec,PodSandboxId:6bd0c496526eee70b468a540d366485623ccbdcf3eb7609a714ac20cbc5d5025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978456893124913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b82rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08412bee-7381-4d81-bb67-fb39fefc29bb,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db043049216b8e9711ad9597bf3f558df76da2f6b63450dc7d72191286a72ea1,PodSandboxId:88b6b66dc3d0a9689fe2a25e4558b4e979d37dcd2d09d98e33ef8b4115c9543d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978446351922061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc637e4dbac1b60946ef8b8b41f9466e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de8e949243b129684444f580974b2708ba640f1a9a8b275a48e8ae402c9de646,PodSandboxId:c6d3150535e70dbcdadcc2d08a49bb2bc06b51623a4ae66a6aab2e0826a2726e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca153
5f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978446326150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f6f0cb0ca439e1dc31fe817fd3f42d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b3bfe261b77f0ba4520f43b80e5145670721542813f3e7c55d5319750a34dd,PodSandboxId:df6752227df7d91d3352feba918bfcd55d364157ee9c3256844fa6640ed383da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978446242328079,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87351d74dc5af8c3d91d7939c5b791fd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:562a81fb3a97c2c5334e71a41e5cfb033187d6cb3dd12ee9b7497524d4759fc5,PodSandboxId:8f44d67f36eef50cd0be5ae09fea0a63a26ba470a4f697d36ebb33bcd5359ea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978446200748087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2abf9c8c40e7c4507beac5cc3efdab,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942eb3eb328a5ac5904537a58b053796083e4066e416693c7233b8b2dad63aef,PodSandboxId:081e45ede512c52eb3b8ebe91cee0f1a6c2bd0e9cd4b254764a28959a22f89e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978162017222713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-986409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc637e4dbac1b60946ef8b8b41f9466e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65af4cb6-595d-4d40-bd54-4daca1c295fc name=/runtime.v1.RuntimeService/ListContainers
	[... two further Version/ImageFsInfo/ListContainers polling cycles at 12:09:36.873 and 12:09:36.916, identical to the one above apart from request ids and sub-second timestamps, elided ...]
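	The debug lines above are CRI-O answering the three RPCs that minikube's log collector polls in a loop: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter (hence the "No filters were applied" line). The same calls can be reproduced by hand from inside the node with crictl; a minimal sketch, assuming the crio socket path shown in the node's cri-socket annotation below:
	
	$ out/minikube-linux-amd64 -p embed-certs-986409 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
	$ out/minikube-linux-amd64 -p embed-certs-986409 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
	$ out/minikube-linux-amd64 -p embed-certs-986409 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"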
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	17149994e0fc2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           50 seconds ago      Exited              dashboard-metrics-scraper   9                   b2f001e415b9d       dashboard-metrics-scraper-86c6bf9756-5n7kn
	dfaa3c7381575       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   26a03d527501b       kubernetes-dashboard-7779f9b69b-lb668
	6dfd5591993c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   3715da4596d5f       storage-provisioner
	a5e7b314dc13a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   3273437095fd8       coredns-668d6bf9bc-9sk5f
	c5b9888817fb5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   59a3008d97e92       coredns-668d6bf9bc-jvx66
	16d5ff23308a2       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           22 minutes ago      Running             kube-proxy                  0                   6bd0c496526ee       kube-proxy-b82rc
	db043049216b8       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           22 minutes ago      Running             kube-apiserver              2                   88b6b66dc3d0a       kube-apiserver-embed-certs-986409
	de8e949243b12       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           22 minutes ago      Running             etcd                        2                   c6d3150535e70       etcd-embed-certs-986409
	19b3bfe261b77       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           22 minutes ago      Running             kube-controller-manager     2                   df6752227df7d       kube-controller-manager-embed-certs-986409
	562a81fb3a97c       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           22 minutes ago      Running             kube-scheduler              2                   8f44d67f36eef       kube-scheduler-embed-certs-986409
	942eb3eb328a5       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   081e45ede512c       kube-apiserver-embed-certs-986409
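	One anomaly stands out in this table: dashboard-metrics-scraper is Exited on attempt 9 (a crash loop), while every other container is Running on attempts 0-2. A sketch for pulling its output, either straight from the CRI using the container-ID prefix in the first column, or via the API server (the kubectl context name is assumed to match the minikube profile):
	
	$ out/minikube-linux-amd64 -p embed-certs-986409 ssh "sudo crictl logs 17149994e0fc2"
	$ kubectl --context embed-certs-986409 -n kubernetes-dashboard logs dashboard-metrics-scraper-86c6bf9756-5n7kn --previous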
	
	
	==> coredns [a5e7b314dc13ad12a41f35a9ffde3d5eb541b67822269a47d226f703b9a1189b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c5b9888817fb5630d001e698f6af51a434b5092fa4e70e4f1bc12f4502e3080d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-986409
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-986409
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=embed-certs-986409
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_47_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:47:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-986409
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:09:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:07:13 +0000   Mon, 27 Jan 2025 11:47:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:07:13 +0000   Mon, 27 Jan 2025 11:47:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:07:13 +0000   Mon, 27 Jan 2025 11:47:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:07:13 +0000   Mon, 27 Jan 2025 11:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.29
	  Hostname:    embed-certs-986409
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87e2d087401a4fc99064029a79bfa61c
	  System UUID:                87e2d087-401a-4fc9-9064-029a79bfa61c
	  Boot ID:                    c50c1302-4e67-4c7d-9976-f819b003e384
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9sk5f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-668d6bf9bc-jvx66                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-986409                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-986409             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-986409    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-b82rc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-986409             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-pjkmz                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-5n7kn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-lb668         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-986409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-986409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-986409 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-986409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-986409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-986409 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-986409 event: Registered Node embed-certs-986409 in Controller
	  Normal  CIDRAssignmentFailed     22m                cidrAllocator    Node embed-certs-986409 status is now: CIDRAssignmentFailed
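	The request percentages in the tables above are computed against Allocatable: 950m of CPU against 2 cores is 950/2000, which kubectl truncates to 47%, and 440Mi of memory against 2164184Ki (about 2113Mi) is about 20%. The whole block can be regenerated for this profile with (context name again assumed to match the profile):
	
	$ kubectl --context embed-certs-986409 describe node embed-certs-986409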
	
	
	==> dmesg <==
	[  +4.912344] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.936427] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619572] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.550789] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.061553] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064780] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.169455] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.157562] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.287877] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +3.935813] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +1.722182] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.059082] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.520306] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.887294] kauditd_printk_skb: 87 callbacks suppressed
	[Jan27 11:47] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.873946] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +4.386875] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.670600] systemd-fstab-generator[2998]: Ignoring "noauto" option for root device
	[  +4.493070] systemd-fstab-generator[3110]: Ignoring "noauto" option for root device
	[  +0.109972] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.145854] kauditd_printk_skb: 108 callbacks suppressed
	[  +8.348901] kauditd_printk_skb: 1 callbacks suppressed
	[Jan27 11:48] kauditd_printk_skb: 3 callbacks suppressed
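	The ring-buffer excerpt above is dominated by systemd-fstab-generator "noauto" notices and kauditd rate-limiting, which appear on every boot of the minikube guest image; the NFSD recovery-directory errors likewise recur each boot and are not specific to this run. To re-read the buffer directly on the node:
	
	$ out/minikube-linux-amd64 -p embed-certs-986409 ssh "sudo dmesg"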
	
	
	==> etcd [de8e949243b129684444f580974b2708ba640f1a9a8b275a48e8ae402c9de646] <==
	{"level":"warn","ts":"2025-01-27T12:09:10.722580Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.230957ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:10.725055Z","caller":"traceutil/trace.go:171","msg":"trace[274995882] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1717; }","duration":"376.697106ms","start":"2025-01-27T12:09:10.348345Z","end":"2025-01-27T12:09:10.725042Z","steps":["trace[274995882] 'agreement among raft nodes before linearized reading'  (duration: 374.220987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:10.722862Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"447.824403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:10.725587Z","caller":"traceutil/trace.go:171","msg":"trace[894268933] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1717; }","duration":"450.571158ms","start":"2025-01-27T12:09:10.275004Z","end":"2025-01-27T12:09:10.725576Z","steps":["trace[894268933] 'agreement among raft nodes before linearized reading'  (duration: 447.841261ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:10.726009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:09:10.274986Z","time spent":"451.006961ms","remote":"127.0.0.1:32830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T12:09:10.975166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.883741ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17428531055476039049 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:71de94a796835d88>","response":"size:41"}
	{"level":"warn","ts":"2025-01-27T12:09:11.301859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.029662ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17428531055476039050 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.29\" mod_revision:1709 > success:<request_put:<key:\"/registry/masterleases/192.168.72.29\" value_size:67 lease:8205159018621263240 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.29\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T12:09:11.301963Z","caller":"traceutil/trace.go:171","msg":"trace[1414896136] linearizableReadLoop","detail":"{readStateIndex:2002; appliedIndex:2001; }","duration":"322.86199ms","start":"2025-01-27T12:09:10.979088Z","end":"2025-01-27T12:09:11.301950Z","steps":["trace[1414896136] 'read index received'  (duration: 191.580015ms)","trace[1414896136] 'applied index is now lower than readState.Index'  (duration: 131.280544ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:09:11.302030Z","caller":"traceutil/trace.go:171","msg":"trace[2066756840] transaction","detail":"{read_only:false; response_revision:1718; number_of_response:1; }","duration":"325.827358ms","start":"2025-01-27T12:09:10.976194Z","end":"2025-01-27T12:09:11.302021Z","steps":["trace[2066756840] 'process raft request'  (duration: 194.527642ms)","trace[2066756840] 'compare'  (duration: 130.762988ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:09:11.302097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:09:10.976178Z","time spent":"325.884061ms","remote":"127.0.0.1:60896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":119,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.72.29\" mod_revision:1709 > success:<request_put:<key:\"/registry/masterleases/192.168.72.29\" value_size:67 lease:8205159018621263240 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.29\" > >"}
	{"level":"warn","ts":"2025-01-27T12:09:11.302273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.181891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.302319Z","caller":"traceutil/trace.go:171","msg":"trace[720260387] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1718; }","duration":"323.242504ms","start":"2025-01-27T12:09:10.979067Z","end":"2025-01-27T12:09:11.302310Z","steps":["trace[720260387] 'agreement among raft nodes before linearized reading'  (duration: 323.149877ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:11.302347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:09:10.979057Z","time spent":"323.282437ms","remote":"127.0.0.1:32830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T12:09:11.302489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.332011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2025-01-27T12:09:11.303091Z","caller":"traceutil/trace.go:171","msg":"trace[1539537767] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1718; }","duration":"128.953837ms","start":"2025-01-27T12:09:11.174122Z","end":"2025-01-27T12:09:11.303076Z","steps":["trace[1539537767] 'agreement among raft nodes before linearized reading'  (duration: 128.308671ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:09:11.564007Z","caller":"traceutil/trace.go:171","msg":"trace[2146102780] linearizableReadLoop","detail":"{readStateIndex:2003; appliedIndex:2002; }","duration":"253.356583ms","start":"2025-01-27T12:09:11.310635Z","end":"2025-01-27T12:09:11.563992Z","steps":["trace[2146102780] 'read index received'  (duration: 159.611527ms)","trace[2146102780] 'applied index is now lower than readState.Index'  (duration: 93.742851ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:09:11.564269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.277601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.564333Z","caller":"traceutil/trace.go:171","msg":"trace[1752465812] transaction","detail":"{read_only:false; response_revision:1719; number_of_response:1; }","duration":"253.960276ms","start":"2025-01-27T12:09:11.310363Z","end":"2025-01-27T12:09:11.564323Z","steps":["trace[1752465812] 'process raft request'  (duration: 159.849056ms)","trace[1752465812] 'compare'  (duration: 93.692043ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:09:11.564415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.595697ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.564335Z","caller":"traceutil/trace.go:171","msg":"trace[2059478980] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1719; }","duration":"252.353636ms","start":"2025-01-27T12:09:11.311970Z","end":"2025-01-27T12:09:11.564324Z","steps":["trace[2059478980] 'agreement among raft nodes before linearized reading'  (duration: 252.252379ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:09:11.564455Z","caller":"traceutil/trace.go:171","msg":"trace[1655116492] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1719; }","duration":"216.639737ms","start":"2025-01-27T12:09:11.347808Z","end":"2025-01-27T12:09:11.564448Z","steps":["trace[1655116492] 'agreement among raft nodes before linearized reading'  (duration: 216.542046ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:11.564303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.655159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:475"}
	{"level":"info","ts":"2025-01-27T12:09:11.564557Z","caller":"traceutil/trace.go:171","msg":"trace[2133605930] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1719; }","duration":"253.932889ms","start":"2025-01-27T12:09:11.310617Z","end":"2025-01-27T12:09:11.564550Z","steps":["trace[2133605930] 'agreement among raft nodes before linearized reading'  (duration: 253.614429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:11.948846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.772493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.948963Z","caller":"traceutil/trace.go:171","msg":"trace[2036864629] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1719; }","duration":"273.922473ms","start":"2025-01-27T12:09:11.675030Z","end":"2025-01-27T12:09:11.948952Z","steps":["trace[2036864629] 'range keys from in-memory index tree'  (duration: 273.722298ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:37 up 27 min,  0 users,  load average: 0.62, 0.33, 0.25
	Linux embed-certs-986409 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [942eb3eb328a5ac5904537a58b053796083e4066e416693c7233b8b2dad63aef] <==
	W0127 11:47:21.828004       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:21.840493       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:21.852236       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:21.908990       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.019031       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.025570       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.048135       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.053687       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.053807       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.062383       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.072791       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.077128       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.109988       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.124131       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.179966       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.267470       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.267595       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.373000       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.498176       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.632382       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.684591       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.691714       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.704573       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.753394       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:47:22.784822       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [db043049216b8e9711ad9597bf3f558df76da2f6b63450dc7d72191286a72ea1] <==
	I0127 12:05:29.755338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:05:29.755386       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:07:28.753854       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:28.753943       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:07:29.755901       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:29.756004       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:07:29.756055       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:07:29.756119       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:07:29.757242       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:07:29.757303       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:08:29.758115       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 12:08:29.758250       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:29.758342       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 12:08:29.758390       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:08:29.759582       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:08:29.759707       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [19b3bfe261b77f0ba4520f43b80e5145670721542813f3e7c55d5319750a34dd] <==
	E0127 12:05:05.482558       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:05.537634       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:05:35.489381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:35.545854       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:05.496622       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:05.553209       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:35.502858       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:35.561963       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:05.509171       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:05.568382       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:07:13.564444       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-986409"
	E0127 12:07:35.515388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:35.574710       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:05.522127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:05.582216       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:35.528395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:35.588826       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:08:47.617225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="141.184µs"
	I0127 12:08:52.663615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="129.086µs"
	I0127 12:08:54.769699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="92.934µs"
	E0127 12:09:05.535733       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:09:05.597906       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:09:07.663073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="83.85µs"
	E0127 12:09:35.542498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:09:35.606198       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [16d5ff23308a221d0bfc125715b10af070d0c9a8ef60c0bbbb62bd42f0dfd7ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:47:37.446875       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:47:37.469012       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.29"]
	E0127 11:47:37.469213       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:47:37.787482       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:47:37.815744       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:47:37.815862       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:47:37.995924       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:47:37.996143       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:47:37.996154       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:47:38.013005       1 config.go:199] "Starting service config controller"
	I0127 11:47:38.013043       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:47:38.013067       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:47:38.013071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:47:38.023872       1 config.go:329] "Starting node config controller"
	I0127 11:47:38.023900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:47:38.114153       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:47:38.115104       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:47:38.194586       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [562a81fb3a97c2c5334e71a41e5cfb033187d6cb3dd12ee9b7497524d4759fc5] <==
	W0127 11:47:28.773441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:47:28.776196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:28.773585       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:47:28.776299       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:28.773720       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:47:28.776336       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.586571       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:47:29.586679       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.619985       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:47:29.620051       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.741665       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:47:29.741727       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.757311       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:29.757421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.788283       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:47:29.788387       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.827582       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:29.827985       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:29.913882       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:47:29.913933       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 11:47:30.014703       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:47:30.014812       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:47:30.041772       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:47:30.041884       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 11:47:32.264192       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:08:52 embed-certs-986409 kubelet[3004]: E0127 12:08:52.644030    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pjkmz" podUID="4828c28f-5ef4-48ea-9360-151007c2d9be"
	Jan 27 12:08:54 embed-certs-986409 kubelet[3004]: I0127 12:08:54.751443    3004 scope.go:117] "RemoveContainer" containerID="17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d"
	Jan 27 12:08:54 embed-certs-986409 kubelet[3004]: E0127 12:08:54.752334    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-5n7kn_kubernetes-dashboard(f303c5bf-fd09-4021-a854-50341e582743)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-5n7kn" podUID="f303c5bf-fd09-4021-a854-50341e582743"
	Jan 27 12:09:02 embed-certs-986409 kubelet[3004]: E0127 12:09:02.042140    3004 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979742041844278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:02 embed-certs-986409 kubelet[3004]: E0127 12:09:02.042452    3004 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979742041844278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:07 embed-certs-986409 kubelet[3004]: E0127 12:09:07.646459    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pjkmz" podUID="4828c28f-5ef4-48ea-9360-151007c2d9be"
	Jan 27 12:09:09 embed-certs-986409 kubelet[3004]: I0127 12:09:09.643765    3004 scope.go:117] "RemoveContainer" containerID="17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d"
	Jan 27 12:09:09 embed-certs-986409 kubelet[3004]: E0127 12:09:09.644006    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-5n7kn_kubernetes-dashboard(f303c5bf-fd09-4021-a854-50341e582743)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-5n7kn" podUID="f303c5bf-fd09-4021-a854-50341e582743"
	Jan 27 12:09:12 embed-certs-986409 kubelet[3004]: E0127 12:09:12.044257    3004 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979752043956007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:12 embed-certs-986409 kubelet[3004]: E0127 12:09:12.044310    3004 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979752043956007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:20 embed-certs-986409 kubelet[3004]: I0127 12:09:20.643736    3004 scope.go:117] "RemoveContainer" containerID="17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d"
	Jan 27 12:09:20 embed-certs-986409 kubelet[3004]: E0127 12:09:20.643982    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-5n7kn_kubernetes-dashboard(f303c5bf-fd09-4021-a854-50341e582743)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-5n7kn" podUID="f303c5bf-fd09-4021-a854-50341e582743"
	Jan 27 12:09:21 embed-certs-986409 kubelet[3004]: E0127 12:09:21.645051    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pjkmz" podUID="4828c28f-5ef4-48ea-9360-151007c2d9be"
	Jan 27 12:09:22 embed-certs-986409 kubelet[3004]: E0127 12:09:22.046757    3004 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979762046359729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:22 embed-certs-986409 kubelet[3004]: E0127 12:09:22.046787    3004 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979762046359729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:31 embed-certs-986409 kubelet[3004]: E0127 12:09:31.686243    3004 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:09:31 embed-certs-986409 kubelet[3004]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:09:31 embed-certs-986409 kubelet[3004]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:09:31 embed-certs-986409 kubelet[3004]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:09:31 embed-certs-986409 kubelet[3004]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:09:32 embed-certs-986409 kubelet[3004]: E0127 12:09:32.048155    3004 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979772047564916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:32 embed-certs-986409 kubelet[3004]: E0127 12:09:32.048177    3004 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979772047564916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:32 embed-certs-986409 kubelet[3004]: I0127 12:09:32.643060    3004 scope.go:117] "RemoveContainer" containerID="17149994e0fc27b62c2b83d9771516ec784b4d28f92ec4a963a72274960d7c3d"
	Jan 27 12:09:32 embed-certs-986409 kubelet[3004]: E0127 12:09:32.643278    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-5n7kn_kubernetes-dashboard(f303c5bf-fd09-4021-a854-50341e582743)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-5n7kn" podUID="f303c5bf-fd09-4021-a854-50341e582743"
	Jan 27 12:09:32 embed-certs-986409 kubelet[3004]: E0127 12:09:32.644459    3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pjkmz" podUID="4828c28f-5ef4-48ea-9360-151007c2d9be"
	
	
	==> kubernetes-dashboard [dfaa3c7381575d59b62c74a421309679fd627b26e7151fb34c378357f18f80f5] <==
	2025/01/27 11:57:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:57:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:09:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6dfd5591993c3cab9a9b792fcc22beeb8117b03c207bf3b882c99ec5335baf94] <==
	I0127 11:47:38.399045       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:47:38.412933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:47:38.413062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:47:38.437874       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:47:38.442613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-986409_20fcfb66-ffdd-44a7-b518-febbbc7f2c79!
	I0127 11:47:38.443529       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a517518-ce86-4f0a-a31c-4d74dc117a53", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-986409_20fcfb66-ffdd-44a7-b518-febbbc7f2c79 became leader
	I0127 11:47:38.543269       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-986409_20fcfb66-ffdd-44a7-b518-febbbc7f2c79!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-986409 -n embed-certs-986409
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-986409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-pjkmz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-986409 describe pod metrics-server-f79f97bbb-pjkmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-986409 describe pod metrics-server-f79f97bbb-pjkmz: exit status 1 (61.431093ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-pjkmz" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-986409 describe pod metrics-server-f79f97bbb-pjkmz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1645.52s)
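Note on the embed-certs SecondStart logs above: the two errors that repeat throughout are visible in the kubelet and dashboard sections rather than being new regressions in this run. metrics-server stays in ImagePullBackOff because its image was pointed at fake.domain, which DNS cannot resolve ("no such host"), and dashboard-metrics-scraper sits in CrashLoopBackOff while the dashboard's metric client health check keeps failing. A minimal triage sketch, reusing the non-running-pod query the post-mortem harness itself runs; the `logs --previous` step is a hypothetical follow-up, not part of the recorded run:

	# Same non-running-pod query the harness uses in its post-mortem step.
	kubectl --context embed-certs-986409 get po -A --field-selector=status.phase!=Running

	# Hypothetical follow-up: fetch the crash-looping scraper's last container logs.
	kubectl --context embed-certs-986409 -n kubernetes-dashboard \
	  logs --previous deploy/dashboard-metrics-scraper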

TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-570778 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-570778 create -f testdata/busybox.yaml: exit status 1 (50.302537ms)

** stderr ** 
	error: context "old-k8s-version-570778" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-570778 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 6 (244.821548ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0127 11:42:32.517279   69952 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 6 (230.046032ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0127 11:42:32.749669   69981 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)
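Note on the DeployApp failure above: the deploy never reaches a cluster; kubectl reports that context "old-k8s-version-570778" does not exist because the profile is missing from the kubeconfig, and the status output itself names the fix (`minikube update-context`). A minimal recovery sketch following that hint, assuming the profile's VM still exists; this is a hypothetical follow-up, not part of the recorded run:

	# Rewrite the kubeconfig entry for the profile, as the warning above suggests.
	out/minikube-linux-amd64 update-context -p old-k8s-version-570778

	# Verify the context is back, then retry the failed step.
	kubectl config get-contexts old-k8s-version-570778
	kubectl --context old-k8s-version-570778 create -f testdata/busybox.yaml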

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 11:42:34.555566   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.374874048s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-570778 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-570778 describe deploy/metrics-server -n kube-system: exit status 1 (44.951524ms)

** stderr ** 
	error: context "old-k8s-version-570778" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-570778 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 6 (224.019289ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 11:44:13.393489   70573 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.64s)
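
The status error at status.go:458 is the same root cause seen from another angle: minikube uses the profile name as the kubeconfig context name, and "old-k8s-version-570778" is missing from the kubeconfig on this host. A sketch of that lookup with client-go (path and profile name copied from the log; the logic is illustrative, not minikube's implementation):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/20319-18835/kubeconfig")
	if err != nil {
		panic(err)
	}
	const profile = "old-k8s-version-570778" // context name == profile name
	if _, ok := cfg.Contexts[profile]; !ok {
		fmt.Printf("%q does not appear in the kubeconfig\n", profile)
	}
}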

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1598.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m35.791319139s)
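
"signal: killed (26m35.791319139s)" means the start invocation never returned and was terminated from outside; the duration tracks the 1598.04s charged to the test, consistent with a suite-level timeout rather than a minikube exit. In Go, a context deadline on a child process surfaces in exactly this form, as a small sketch shows (the command and timeout are placeholders):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// When the deadline passes, CommandContext kills the child and
	// Run reports the signal instead of an exit code.
	err := exec.CommandContext(ctx, "sleep", "60").Run()
	fmt.Println(err) // "signal: killed"
}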

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-407489] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-407489" primary control-plane node in "default-k8s-diff-port-407489" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-407489" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-407489 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:43:15.960313   70237 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:43:15.960415   70237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:43:15.960422   70237 out.go:358] Setting ErrFile to fd 2...
	I0127 11:43:15.960427   70237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:43:15.960605   70237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:43:15.961099   70237 out.go:352] Setting JSON to false
	I0127 11:43:15.962036   70237 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8696,"bootTime":1737969500,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:43:15.962121   70237 start.go:139] virtualization: kvm guest
	I0127 11:43:15.964305   70237 out.go:177] * [default-k8s-diff-port-407489] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:43:15.965741   70237 notify.go:220] Checking for updates...
	I0127 11:43:15.965786   70237 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:43:15.967112   70237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:43:15.968448   70237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:43:15.969898   70237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:43:15.971186   70237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:43:15.972512   70237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:43:15.974183   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:43:15.974537   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:43:15.974571   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:43:15.990019   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39269
	I0127 11:43:15.990447   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:43:15.991071   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:43:15.991097   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:43:15.991459   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:43:15.991707   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:15.991986   70237 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:43:15.992442   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:43:15.992498   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:43:16.007141   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45959
	I0127 11:43:16.007641   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:43:16.008179   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:43:16.008205   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:43:16.008579   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:43:16.008716   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:16.046455   70237 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:43:16.047638   70237 start.go:297] selected driver: kvm2
	I0127 11:43:16.047653   70237 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-407489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-407489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:43:16.047769   70237 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:43:16.048510   70237 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:43:16.048601   70237 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:43:16.063722   70237 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:43:16.064093   70237 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:43:16.064125   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:43:16.064167   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:43:16.064221   70237 start.go:340] cluster config:
	{Name:default-k8s-diff-port-407489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-407489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:43:16.064316   70237 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:43:16.066060   70237 out.go:177] * Starting "default-k8s-diff-port-407489" primary control-plane node in "default-k8s-diff-port-407489" cluster
	I0127 11:43:16.067330   70237 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:43:16.067371   70237 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 11:43:16.067380   70237 cache.go:56] Caching tarball of preloaded images
	I0127 11:43:16.067438   70237 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:43:16.067447   70237 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:43:16.067538   70237 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/config.json ...
	I0127 11:43:16.067728   70237 start.go:360] acquireMachinesLock for default-k8s-diff-port-407489: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:43:16.067769   70237 start.go:364] duration metric: took 24.809µs to acquireMachinesLock for "default-k8s-diff-port-407489"
	I0127 11:43:16.067782   70237 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:43:16.067787   70237 fix.go:54] fixHost starting: 
	I0127 11:43:16.068065   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:43:16.068095   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:43:16.082154   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0127 11:43:16.082604   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:43:16.083091   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:43:16.083114   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:43:16.083525   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:43:16.083733   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:16.083878   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:43:16.085507   70237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-407489: state=Stopped err=<nil>
	I0127 11:43:16.085530   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	W0127 11:43:16.085697   70237 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:43:16.087642   70237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-407489" ...
	I0127 11:43:16.088876   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Start
	I0127 11:43:16.089002   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) starting domain...
	I0127 11:43:16.089021   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) ensuring networks are active...
	I0127 11:43:16.089776   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Ensuring network default is active
	I0127 11:43:16.090104   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Ensuring network mk-default-k8s-diff-port-407489 is active
	I0127 11:43:16.090566   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) getting domain XML...
	I0127 11:43:16.091254   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) creating domain...
	I0127 11:43:17.342893   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) waiting for IP...
	I0127 11:43:17.343971   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.344445   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.344586   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:17.344443   70273 retry.go:31] will retry after 190.133763ms: waiting for domain to come up
	I0127 11:43:17.535825   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.536317   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.536341   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:17.536258   70273 retry.go:31] will retry after 346.352001ms: waiting for domain to come up
	I0127 11:43:17.883654   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.884140   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:17.884224   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:17.884128   70273 retry.go:31] will retry after 337.368452ms: waiting for domain to come up
	I0127 11:43:18.222484   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:18.223055   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:18.223125   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:18.223014   70273 retry.go:31] will retry after 534.999266ms: waiting for domain to come up
	I0127 11:43:18.759737   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:18.760348   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:18.760391   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:18.760304   70273 retry.go:31] will retry after 706.121405ms: waiting for domain to come up
	I0127 11:43:19.468019   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:19.468575   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:19.468617   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:19.468472   70273 retry.go:31] will retry after 656.943417ms: waiting for domain to come up
	I0127 11:43:20.127433   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:20.127937   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:20.127967   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:20.127901   70273 retry.go:31] will retry after 970.704021ms: waiting for domain to come up
	I0127 11:43:21.100678   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:21.101153   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:21.101184   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:21.101126   70273 retry.go:31] will retry after 1.274056355s: waiting for domain to come up
	I0127 11:43:22.377622   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:22.378235   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:22.378267   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:22.378203   70273 retry.go:31] will retry after 1.331126391s: waiting for domain to come up
	I0127 11:43:23.711556   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:23.712042   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:23.712060   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:23.711999   70273 retry.go:31] will retry after 1.681797841s: waiting for domain to come up
	I0127 11:43:25.396064   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:25.396577   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:25.396637   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:25.396561   70273 retry.go:31] will retry after 2.263762127s: waiting for domain to come up
	I0127 11:43:27.661891   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:27.662439   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:27.662470   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:27.662380   70273 retry.go:31] will retry after 2.249121175s: waiting for domain to come up
	I0127 11:43:29.914681   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:29.915190   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | unable to find current IP address of domain default-k8s-diff-port-407489 in network mk-default-k8s-diff-port-407489
	I0127 11:43:29.915220   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | I0127 11:43:29.915124   70273 retry.go:31] will retry after 3.862336868s: waiting for domain to come up
	I0127 11:43:33.780574   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.781018   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) found domain IP: 192.168.39.69
	I0127 11:43:33.781044   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) reserving static IP address...
	I0127 11:43:33.781058   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has current primary IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.781442   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-407489", mac: "52:54:00:04:a3:a0", ip: "192.168.39.69"} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:33.781484   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | skip adding static IP to network mk-default-k8s-diff-port-407489 - found existing host DHCP lease matching {name: "default-k8s-diff-port-407489", mac: "52:54:00:04:a3:a0", ip: "192.168.39.69"}
	I0127 11:43:33.781494   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) reserved static IP address 192.168.39.69 for domain default-k8s-diff-port-407489
	I0127 11:43:33.781505   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Getting to WaitForSSH function...
	I0127 11:43:33.781514   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) waiting for SSH...
	I0127 11:43:33.783524   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.783876   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:33.783906   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.784069   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Using SSH client type: external
	I0127 11:43:33.784090   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa (-rw-------)
	I0127 11:43:33.784108   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:43:33.784149   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | About to run SSH command:
	I0127 11:43:33.784171   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | exit 0
	I0127 11:43:33.911521   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | SSH cmd err, output: <nil>: 
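
The WaitForSSH loop above succeeds once `exit 0` runs cleanly over the logged external ssh invocation; a nil error is exactly the "SSH cmd err, output: <nil>" line. A self-contained sketch of that probe (user, IP, and key path copied from the log; options abbreviated):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa",
		"docker@192.168.39.69", "exit 0")
	fmt.Println(cmd.Run()) // <nil> once sshd in the guest accepts the session
}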
	I0127 11:43:33.911936   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetConfigRaw
	I0127 11:43:33.912609   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetIP
	I0127 11:43:33.915402   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.915819   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:33.915843   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.916103   70237 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/config.json ...
	I0127 11:43:33.916268   70237 machine.go:93] provisionDockerMachine start ...
	I0127 11:43:33.916284   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:33.916460   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:33.918693   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.918988   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:33.919015   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:33.919120   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:33.919372   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:33.919569   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:33.919735   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:33.919910   70237 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:33.920145   70237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0127 11:43:33.920158   70237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:43:34.031475   70237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:43:34.031502   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetMachineName
	I0127 11:43:34.031751   70237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-407489"
	I0127 11:43:34.031775   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetMachineName
	I0127 11:43:34.031925   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.034582   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.034991   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.035023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.035167   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.035379   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.035535   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.035708   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.035888   70237 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:34.036067   70237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0127 11:43:34.036084   70237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-407489 && echo "default-k8s-diff-port-407489" | sudo tee /etc/hostname
	I0127 11:43:34.160481   70237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-407489
	
	I0127 11:43:34.160517   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.163235   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.163563   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.163591   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.163765   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.163947   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.164159   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.164329   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.164549   70237 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:34.164762   70237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0127 11:43:34.164788   70237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-407489' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-407489/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-407489' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:43:34.284532   70237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
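
The shell fragment above is the provisioner's idempotent /etc/hosts update: only when no line already ends with the new hostname does it either rewrite an existing 127.0.1.1 entry in place or append one. Rendered as a template for an arbitrary hostname (an illustrative sketch, not minikube's source):

package main

import "fmt"

func hostsGuard(hostname string) string {
	// Same guard as in the log: check, then sed-replace or tee-append.
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsGuard("default-k8s-diff-port-407489")) }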
	I0127 11:43:34.284555   70237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:43:34.284601   70237 buildroot.go:174] setting up certificates
	I0127 11:43:34.284610   70237 provision.go:84] configureAuth start
	I0127 11:43:34.284618   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetMachineName
	I0127 11:43:34.284887   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetIP
	I0127 11:43:34.287815   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.288214   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.288250   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.288448   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.290847   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.291233   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.291274   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.291362   70237 provision.go:143] copyHostCerts
	I0127 11:43:34.291442   70237 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:43:34.291463   70237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:43:34.291539   70237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:43:34.291696   70237 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:43:34.291708   70237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:43:34.291751   70237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:43:34.291851   70237 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:43:34.291861   70237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:43:34.291901   70237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:43:34.291973   70237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-407489 san=[127.0.0.1 192.168.39.69 default-k8s-diff-port-407489 localhost minikube]
	I0127 11:43:34.423495   70237 provision.go:177] copyRemoteCerts
	I0127 11:43:34.423558   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:43:34.423582   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.426319   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.426655   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.426691   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.426913   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.427104   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.427293   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.427458   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:43:34.513171   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:43:34.537245   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 11:43:34.559633   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:43:34.583750   70237 provision.go:87] duration metric: took 299.129782ms to configureAuth
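
configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.39.69, the profile name, localhost, and minikube, then copied ca.pem/server.pem/server-key.pem into /etc/docker on the guest. One way to confirm the VM IP really is among the SANs (path taken from the log; the check itself is an illustration, not part of the test):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.VerifyHostname("192.168.39.69")) // nil if the SAN is present
}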
	I0127 11:43:34.583775   70237 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:43:34.583937   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:43:34.584073   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.586622   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.586931   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.586966   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.587080   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.587272   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.587442   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.587579   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.587738   70237 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:34.587886   70237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0127 11:43:34.587905   70237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:43:34.811453   70237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:43:34.811491   70237 machine.go:96] duration metric: took 895.210903ms to provisionDockerMachine
	I0127 11:43:34.811508   70237 start.go:293] postStartSetup for "default-k8s-diff-port-407489" (driver="kvm2")
	I0127 11:43:34.811523   70237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:43:34.811550   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:34.811890   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:43:34.811923   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.814689   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.815046   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.815080   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.815249   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.815470   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.815651   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.815763   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:43:34.899008   70237 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:43:34.902923   70237 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:43:34.902952   70237 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:43:34.903022   70237 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:43:34.903111   70237 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:43:34.903222   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:43:34.913948   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:43:34.938225   70237 start.go:296] duration metric: took 126.699024ms for postStartSetup
	I0127 11:43:34.938271   70237 fix.go:56] duration metric: took 18.870482253s for fixHost
	I0127 11:43:34.938298   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:34.941222   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.941572   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:34.941601   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:34.941784   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:34.941983   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.942120   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:34.942285   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:34.942459   70237 main.go:141] libmachine: Using SSH client type: native
	I0127 11:43:34.942684   70237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0127 11:43:34.942697   70237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:43:35.060113   70237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978215.017760583
	
	I0127 11:43:35.060145   70237 fix.go:216] guest clock: 1737978215.017760583
	I0127 11:43:35.060156   70237 fix.go:229] Guest: 2025-01-27 11:43:35.017760583 +0000 UTC Remote: 2025-01-27 11:43:34.938276119 +0000 UTC m=+19.014362158 (delta=79.484464ms)
	I0127 11:43:35.060184   70237 fix.go:200] guest clock delta is within tolerance: 79.484464ms
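
The tolerance check compares the guest's `date +%s.%N` against the host clock captured on the same SSH round trip; the logged delta is simply guest minus host. Reproducing the arithmetic from the two timestamps above:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1737978215, 17760583) // guest `date +%s.%N` from the log
	remote := time.Date(2025, 1, 27, 11, 43, 34, 938276119, time.UTC)
	fmt.Println(guest.Sub(remote)) // 79.484464ms, matching the logged delta
}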
	I0127 11:43:35.060191   70237 start.go:83] releasing machines lock for "default-k8s-diff-port-407489", held for 18.992412356s
	I0127 11:43:35.060224   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:35.060466   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetIP
	I0127 11:43:35.063248   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.063681   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:35.063710   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.063962   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:35.064461   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:35.064637   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:43:35.064727   70237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:43:35.064772   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:35.064828   70237 ssh_runner.go:195] Run: cat /version.json
	I0127 11:43:35.064850   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:43:35.067600   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.067966   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.068002   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:35.068023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.068215   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:35.068389   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:35.068456   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:35.068495   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:35.068519   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:35.068705   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:43:35.068719   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:43:35.068836   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:43:35.068998   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:43:35.069140   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
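
The two sshutil.go:53 lines above show the runner opening parallel SSH connections to the VM with the machine's RSA key, one per command it is about to run. A minimal Go sketch of an equivalent client, assuming golang.org/x/crypto/ssh; the address, user, and key path are illustrative values lifted from the log, not minikube's actual sshutil code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as logged by sshutil.go:53.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.69:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Each logged "Run:" corresponds to one session executing one command.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("systemctl --version")
	fmt.Println(string(out), err)
}
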
	I0127 11:43:35.183565   70237 ssh_runner.go:195] Run: systemctl --version
	I0127 11:43:35.189424   70237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:43:35.341588   70237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:43:35.348755   70237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:43:35.348835   70237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:43:35.365386   70237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:43:35.365434   70237 start.go:495] detecting cgroup driver to use...
	I0127 11:43:35.365508   70237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:43:35.381003   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:43:35.393445   70237 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:43:35.393501   70237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:43:35.408046   70237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:43:35.421794   70237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:43:35.533177   70237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:43:35.679248   70237 docker.go:233] disabling docker service ...
	I0127 11:43:35.679306   70237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:43:35.693498   70237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:43:35.706048   70237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:43:35.854085   70237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:43:35.966154   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:43:35.979855   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:43:35.997517   70237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:43:35.997585   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.007174   70237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:43:36.007231   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.016615   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.027152   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.036791   70237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:43:36.047142   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.056957   70237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.073346   70237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:43:36.082914   70237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:43:36.092118   70237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:43:36.092169   70237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:43:36.104653   70237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
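
The sysctl probe above exits with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding. A hedged Go sketch of that sequence (requires root; paths mirror the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		// The "couldn't verify netfilter ... which might be okay" branch:
		// the proc entry appears only after br_netfilter is loaded.
		fmt.Println("sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
			return
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
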
	I0127 11:43:36.113398   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:43:36.224992   70237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:43:36.309741   70237 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:43:36.309802   70237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:43:36.314588   70237 start.go:563] Will wait 60s for crictl version
	I0127 11:43:36.314652   70237 ssh_runner.go:195] Run: which crictl
	I0127 11:43:36.317963   70237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:43:36.351895   70237 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:43:36.351987   70237 ssh_runner.go:195] Run: crio --version
	I0127 11:43:36.380147   70237 ssh_runner.go:195] Run: crio --version
	I0127 11:43:36.407481   70237 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 11:43:36.409073   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetIP
	I0127 11:43:36.411544   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:36.411904   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:43:36.411936   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:43:36.412111   70237 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 11:43:36.415814   70237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
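
The one-liner above refreshes the host.minikube.internal entry: it filters any stale line out of /etc/hosts, appends the fresh mapping, stages the result in /tmp, and copies it back with sudo. A simplified Go sketch of the same rewrite, assuming the process already runs as root (so it writes the file directly instead of staging through /tmp):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal" // value from the log
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	kept := make([]string, 0)
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
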
	I0127 11:43:36.427201   70237 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-407489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-407489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:43:36.427306   70237 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:43:36.427344   70237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:43:36.461300   70237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 11:43:36.461376   70237 ssh_runner.go:195] Run: which lz4
	I0127 11:43:36.465186   70237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:43:36.468982   70237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:43:36.469009   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 11:43:37.713951   70237 crio.go:462] duration metric: took 1.248798312s to copy over tarball
	I0127 11:43:37.714024   70237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:43:39.787929   70237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.073877879s)
	I0127 11:43:39.787956   70237 crio.go:469] duration metric: took 2.073973853s to extract the tarball
	I0127 11:43:39.787962   70237 ssh_runner.go:146] rm: /preloaded.tar.lz4
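
Because no preload exists on the VM, the runner copies the ~380 MB tarball over and unpacks it with tar -I lz4, timing each step (the "duration metric" lines). A small Go sketch of running such a command with the same kind of timing, using os/exec; the command line is the one from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes a command and prints how long it took, in the spirit of the
// crio.go:462/469 duration metrics above.
func run(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	fmt.Printf("duration metric: took %s to run %s\n", time.Since(start), name)
	return err
}

func main() {
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("extract failed:", err)
	}
}
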
	I0127 11:43:39.823589   70237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:43:39.870151   70237 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:43:39.870174   70237 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:43:39.870183   70237 kubeadm.go:934] updating node { 192.168.39.69 8444 v1.32.1 crio true true} ...
	I0127 11:43:39.870302   70237 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-407489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-407489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:43:39.870387   70237 ssh_runner.go:195] Run: crio config
	I0127 11:43:39.915553   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:43:39.915578   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:43:39.915587   70237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:43:39.915635   70237 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-407489 NodeName:default-k8s-diff-port-407489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:43:39.915771   70237 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-407489"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
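
The kubeadm, kubelet, and kube-proxy configuration above is rendered from the cluster settings logged earlier. A simplified illustration of how such a config can be produced with Go's text/template; the struct and the abridged template are invented for this example, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values substituted into the template; the
// field names here are hypothetical.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, params{"192.168.39.69", 8444, "default-k8s-diff-port-407489", "10.244.0.0/16"})
}
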
	
	I0127 11:43:39.915841   70237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:43:39.925267   70237 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:43:39.925328   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:43:39.934103   70237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0127 11:43:39.950000   70237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:43:39.965091   70237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0127 11:43:39.980306   70237 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0127 11:43:39.984124   70237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:43:39.996881   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:43:40.136692   70237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:43:40.153495   70237 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489 for IP: 192.168.39.69
	I0127 11:43:40.153522   70237 certs.go:194] generating shared ca certs ...
	I0127 11:43:40.153543   70237 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:43:40.153746   70237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:43:40.153799   70237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:43:40.153813   70237 certs.go:256] generating profile certs ...
	I0127 11:43:40.153918   70237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.key
	I0127 11:43:40.154009   70237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/apiserver.key.a6772bc3
	I0127 11:43:40.154063   70237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/proxy-client.key
	I0127 11:43:40.154216   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:43:40.154250   70237 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:43:40.154257   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:43:40.154279   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:43:40.154310   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:43:40.154332   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:43:40.154369   70237 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:43:40.154998   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:43:40.202628   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:43:40.240315   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:43:40.273897   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:43:40.299909   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 11:43:40.324259   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:43:40.347622   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:43:40.370346   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:43:40.393214   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:43:40.415264   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:43:40.436579   70237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:43:40.458389   70237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:43:40.473672   70237 ssh_runner.go:195] Run: openssl version
	I0127 11:43:40.479012   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:43:40.489588   70237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:43:40.493669   70237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:43:40.493713   70237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:43:40.499651   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:43:40.509706   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:43:40.520263   70237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:43:40.524706   70237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:43:40.524758   70237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:43:40.530179   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:43:40.540081   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:43:40.549829   70237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:40.553865   70237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:40.553906   70237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:43:40.558971   70237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:43:40.569220   70237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:43:40.573547   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:43:40.583595   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:43:40.589477   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:43:40.594927   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:43:40.600612   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:43:40.606005   70237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
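
The openssl x509 -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go using crypto/x509 (the path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// duration d, mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}
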
	I0127 11:43:40.611395   70237 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-407489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-407489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:43:40.611472   70237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:43:40.611519   70237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:43:40.653742   70237 cri.go:89] found id: ""
	I0127 11:43:40.653823   70237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:43:40.663483   70237 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:43:40.663503   70237 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:43:40.663551   70237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:43:40.673578   70237 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:43:40.674266   70237 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-407489" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:43:40.674549   70237 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-407489" cluster setting kubeconfig missing "default-k8s-diff-port-407489" context setting]
	I0127 11:43:40.675003   70237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:43:40.676291   70237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:43:40.687483   70237 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.69
	I0127 11:43:40.687517   70237 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:43:40.687529   70237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:43:40.687583   70237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:43:40.722217   70237 cri.go:89] found id: ""
	I0127 11:43:40.722298   70237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:43:40.739306   70237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:43:40.749839   70237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:43:40.749858   70237 kubeadm.go:157] found existing configuration files:
	
	I0127 11:43:40.749905   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:43:40.757999   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:43:40.758049   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:43:40.767047   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:43:40.775477   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:43:40.775536   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:43:40.785208   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:43:40.793897   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:43:40.793958   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:43:40.802543   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:43:40.810886   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:43:40.810938   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:43:40.819558   70237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:43:40.828349   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:40.934253   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:42.157711   70237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223424064s)
	I0127 11:43:42.157737   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:42.534930   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:42.593883   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:42.640703   70237 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:43:42.640775   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:43.141911   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:43.641813   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:44.141565   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:43:44.156425   70237 api_server.go:72] duration metric: took 1.515723375s to wait for apiserver process to appear ...
	I0127 11:43:44.156455   70237 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:43:44.156477   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:44.157026   70237 api_server.go:269] stopped: https://192.168.39.69:8444/healthz: Get "https://192.168.39.69:8444/healthz": dial tcp 192.168.39.69:8444: connect: connection refused
	I0127 11:43:44.656766   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:46.910194   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:43:46.910219   70237 api_server.go:103] status: https://192.168.39.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:43:46.910232   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:46.961892   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 11:43:46.961925   70237 api_server.go:103] status: https://192.168.39.69:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 11:43:47.157339   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:47.162211   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:43:47.162240   70237 api_server.go:103] status: https://192.168.39.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:43:47.656871   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:47.661667   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 11:43:47.661692   70237 api_server.go:103] status: https://192.168.39.69:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 11:43:48.157444   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:43:48.162900   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 200:
	ok
	I0127 11:43:48.170869   70237 api_server.go:141] control plane version: v1.32.1
	I0127 11:43:48.170902   70237 api_server.go:131] duration metric: took 4.014439828s to wait for apiserver health ...
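
The preceding healthz exchanges are the normal startup pattern: connection refused while the apiserver binds, 403 for the anonymous probe user, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal Go sketch of such a polling loop, assuming the probe may skip TLS verification against minikube's self-signed CA; the URL and retry cadence are taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative only: a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.69:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthy:", string(body)) // prints "ok"
				return
			}
			fmt.Println("not ready yet:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence above
	}
	fmt.Println("gave up waiting for /healthz")
}
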
	I0127 11:43:48.170914   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:43:48.170923   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:43:48.172621   70237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:43:48.174257   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:43:48.190274   70237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:43:48.230883   70237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:43:48.240180   70237 system_pods.go:59] 8 kube-system pods found
	I0127 11:43:48.240210   70237 system_pods.go:61] "coredns-668d6bf9bc-psrb6" [6ef3c1b8-a00a-4df3-8811-8ade63c4271f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:43:48.240219   70237 system_pods.go:61] "etcd-default-k8s-diff-port-407489" [fb71e5c7-b69b-4cab-a498-d37b465d7a57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:43:48.240230   70237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-407489" [d138f1d3-9483-4fd1-862e-c04207ac83ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:43:48.240249   70237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-407489" [f08c1556-8863-47ed-ae4d-c255797c2546] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:43:48.240262   70237 system_pods.go:61] "kube-proxy-dswsf" [31fb635d-b654-4890-bb6e-23449e2014ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 11:43:48.240272   70237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-407489" [7c1dc058-074d-440c-8aff-41e9d9d23c06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:43:48.240282   70237 system_pods.go:61] "metrics-server-f79f97bbb-swwsl" [91378ff8-af97-4518-94a4-8ee7673f6b97] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:43:48.240294   70237 system_pods.go:61] "storage-provisioner" [71fbd417-0c7f-4c8d-b24f-efa4fc371ac5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 11:43:48.240305   70237 system_pods.go:74] duration metric: took 9.402064ms to wait for pod list to return data ...
	I0127 11:43:48.240315   70237 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:43:48.243768   70237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:43:48.243800   70237 node_conditions.go:123] node cpu capacity is 2
	I0127 11:43:48.243814   70237 node_conditions.go:105] duration metric: took 3.489641ms to run NodePressure ...
	I0127 11:43:48.243836   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:43:48.635126   70237 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 11:43:48.639192   70237 kubeadm.go:739] kubelet initialised
	I0127 11:43:48.639222   70237 kubeadm.go:740] duration metric: took 4.063122ms waiting for restarted kubelet to initialise ...
	I0127 11:43:48.639232   70237 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:43:48.644980   70237 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-psrb6" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:50.650983   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-psrb6" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:53.151180   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-psrb6" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:53.151203   70237 pod_ready.go:82] duration metric: took 4.506196823s for pod "coredns-668d6bf9bc-psrb6" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:53.151212   70237 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:55.156957   70237 pod_ready.go:103] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:57.157092   70237 pod_ready.go:103] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"False"
	I0127 11:43:59.158222   70237 pod_ready.go:93] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:59.158244   70237 pod_ready.go:82] duration metric: took 6.007026315s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.158253   70237 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.163064   70237 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:59.163083   70237 pod_ready.go:82] duration metric: took 4.823704ms for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.163092   70237 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.167210   70237 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:59.167224   70237 pod_ready.go:82] duration metric: took 4.126762ms for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.167232   70237 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dswsf" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.170844   70237 pod_ready.go:93] pod "kube-proxy-dswsf" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:59.170861   70237 pod_ready.go:82] duration metric: took 3.624137ms for pod "kube-proxy-dswsf" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.170869   70237 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:43:59.175114   70237 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:43:59.175132   70237 pod_ready.go:82] duration metric: took 4.257096ms for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
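
All control-plane pods report Ready within seconds; the same wait loop is then applied to metrics-server-f79f97bbb-swwsl, which stays not-Ready for the rest of this section (note the cluster config above pins the MetricsServer registry to fake.domain, so its image likely cannot be pulled). A client-go sketch of the underlying readiness check; the kubeconfig path and pod name are illustrative values from the log, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20319-18835/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-f79f97bbb-swwsl", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
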
	I0127 11:43:59.175142   70237 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" ...
	I0127 11:44:01.181442   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:03.182465   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:05.681127   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:07.681490   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:09.681581   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:12.181135   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.681538   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.182032   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.184122   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:21.681373   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.682555   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.181747   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.681419   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.681567   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.681781   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.682249   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.181878   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.183457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:41.682339   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.682496   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.181944   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.681423   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.181432   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.681540   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.182005   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.681494   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:01.181668   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.182704   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.681195   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:08.180735   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.181326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:12.181440   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.182012   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.681535   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.181289   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.682460   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.181465   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.181841   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.680961   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.185937   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.680940   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:35.681777   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.682410   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.182049   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:42.182202   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.680856   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:47.182717   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:49.681160   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:51.681649   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:53.681709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:56.181754   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.682655   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.181382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.681326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:05.682184   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.181149   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.681951   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.181930   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.681382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.181077   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.181255   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.682762   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.180989   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.181377   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.682869   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.181312   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.181612   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.181772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.181835   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.682630   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.181118   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.185722   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:47.682388   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.180618   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.182237   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:54.680772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.683260   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:59.183917   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:01.682086   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:04.182095   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:06.681477   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:08.681667   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:11.180827   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.681189   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.682069   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:17.682390   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.182895   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:22.681079   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:24.681767   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:26.683550   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.183056   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:31.183154   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.184826   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.682393   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:38.182709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:40.681322   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:43.182256   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:45.682893   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:48.185457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:50.682230   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:53.181163   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:55.181247   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:57.684463   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:59.175404   70237 pod_ready.go:82] duration metric: took 4m0.000243677s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" ...
	E0127 11:47:59.175451   70237 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:47:59.175501   70237 pod_ready.go:39] duration metric: took 4m10.536256424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
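The 4m poll above can be reproduced by hand. A minimal sketch, assuming kubectl is pointed at the same cluster (the pod name is taken from the log):

	# block until the pod reports Ready, or give up after the same 4m budget
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-f79f97bbb-swwsl --timeout=4m

Here the pod never reached Ready, so the internal wait expired at exactly 4m0s and minikube fell back to a full cluster reset below.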
	I0127 11:47:59.175547   70237 kubeadm.go:597] duration metric: took 4m18.512037331s to restartPrimaryControlPlane
	W0127 11:47:59.175647   70237 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:47:59.175705   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:48:26.746474   70237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.570747097s)
	I0127 11:48:26.746545   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:26.762637   70237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:26.776063   70237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:26.789742   70237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:26.789766   70237 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:26.789818   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:48:26.800449   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:26.800505   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:26.818106   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:48:26.827104   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:26.827167   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:26.844719   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.861215   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:26.861299   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.877899   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:48:26.886638   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:26.886691   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
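Because the kubeadm reset above already removed every /etc/kubernetes/*.conf, each grep exits with status 2 and each rm is a no-op. The four grep/rm pairs implement a stale-kubeconfig sweep: a file is kept only if it references the expected API endpoint. A condensed sketch of the same pattern, with the endpoint and file names taken from the log:

	endpoint="https://control-plane.minikube.internal:8444"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop any kubeconfig that does not point at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done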
	I0127 11:48:26.895347   70237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:27.038970   70237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:34.381659   70237 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:48:34.381747   70237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:48:34.381834   70237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:48:34.382006   70237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:48:34.382166   70237 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:48:34.382273   70237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:48:34.384155   70237 out.go:235]   - Generating certificates and keys ...
	I0127 11:48:34.384281   70237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:48:34.384383   70237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:48:34.384475   70237 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:48:34.384540   70237 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:48:34.384619   70237 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:48:34.384712   70237 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:48:34.384815   70237 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:48:34.384870   70237 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:48:34.384936   70237 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:48:34.385045   70237 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:48:34.385125   70237 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:48:34.385205   70237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:48:34.385276   70237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:48:34.385331   70237 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:48:34.385408   70237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:48:34.385500   70237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:48:34.385576   70237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:48:34.385691   70237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:48:34.385779   70237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:48:34.387105   70237 out.go:235]   - Booting up control plane ...
	I0127 11:48:34.387208   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:48:34.387285   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:48:34.387359   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:48:34.387457   70237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:48:34.387545   70237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:48:34.387589   70237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:48:34.387728   70237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:48:34.387818   70237 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:48:34.387875   70237 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001607262s
	I0127 11:48:34.387947   70237 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:48:34.388039   70237 kubeadm.go:310] [api-check] The API server is healthy after 4.002263796s
	I0127 11:48:34.388196   70237 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:48:34.388338   70237 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:48:34.388399   70237 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:48:34.388623   70237 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-407489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:48:34.388706   70237 kubeadm.go:310] [bootstrap-token] Using token: n96bmw.dtq43nz27fzxgr8y
	I0127 11:48:34.390189   70237 out.go:235]   - Configuring RBAC rules ...
	I0127 11:48:34.390316   70237 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:48:34.390409   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:48:34.390579   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:48:34.390756   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:48:34.390876   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:48:34.390986   70237 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:48:34.391159   70237 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:48:34.391231   70237 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:48:34.391299   70237 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:48:34.391310   70237 kubeadm.go:310] 
	I0127 11:48:34.391403   70237 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:48:34.391413   70237 kubeadm.go:310] 
	I0127 11:48:34.391518   70237 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:48:34.391530   70237 kubeadm.go:310] 
	I0127 11:48:34.391577   70237 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:48:34.391699   70237 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:48:34.391769   70237 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:48:34.391776   70237 kubeadm.go:310] 
	I0127 11:48:34.391868   70237 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:48:34.391882   70237 kubeadm.go:310] 
	I0127 11:48:34.391943   70237 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:48:34.391952   70237 kubeadm.go:310] 
	I0127 11:48:34.392024   70237 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:48:34.392099   70237 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:48:34.392204   70237 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:48:34.392219   70237 kubeadm.go:310] 
	I0127 11:48:34.392359   70237 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:48:34.392465   70237 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:48:34.392480   70237 kubeadm.go:310] 
	I0127 11:48:34.392616   70237 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.392829   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:48:34.392944   70237 kubeadm.go:310] 	--control-plane 
	I0127 11:48:34.392963   70237 kubeadm.go:310] 
	I0127 11:48:34.393089   70237 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:48:34.393100   70237 kubeadm.go:310] 
	I0127 11:48:34.393184   70237 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.393325   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
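Before minikube moves on to CNI setup, the freshly initialized control plane can be sanity-checked directly on the node. A minimal sketch, using the embedded kubectl binary this run deploys (/var/lib/minikube/binaries/v1.32.1/kubectl) and the admin kubeconfig kubeadm just wrote:

	# confirm the API server answers and the node has registered
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system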
	I0127 11:48:34.393340   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:48:34.393350   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:48:34.394995   70237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:48:34.396212   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:48:34.408954   70237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
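The 496-byte conflist itself is not captured in the log. Purely as an illustration of the kind of bridge-plus-portmap config CRI-O loads from /etc/cni/net.d, a minimal conflist might look like the following (every field value here is an assumption, not the actual file):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF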
	I0127 11:48:34.431113   70237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:48:34.431252   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:34.431257   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-407489 minikube.k8s.io/updated_at=2025_01_27T11_48_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=default-k8s-diff-port-407489 minikube.k8s.io/primary=true
	I0127 11:48:34.469468   70237 ops.go:34] apiserver oom_adj: -16
	I0127 11:48:34.666106   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.167035   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.667149   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.167156   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.666148   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.167090   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.667139   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.166714   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.666209   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.166966   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.353909   70237 kubeadm.go:1113] duration metric: took 4.922724686s to wait for elevateKubeSystemPrivileges
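The ten `get sa default` polls above are minikube waiting for the default service account to exist before declaring elevateKubeSystemPrivileges done; once it does, the cluster-admin binding created earlier can be confirmed with the same embedded kubectl (binding name taken from the log):

	sudo /var/lib/minikube/binaries/v1.32.1/kubectl get clusterrolebinding minikube-rbac \
	  --kubeconfig=/var/lib/minikube/kubeconfig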
	I0127 11:48:39.353963   70237 kubeadm.go:394] duration metric: took 4m58.742572387s to StartCluster
	I0127 11:48:39.353997   70237 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.354112   70237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:48:39.356217   70237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.356516   70237 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:48:39.356640   70237 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:48:39.356750   70237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356786   70237 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356793   70237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356805   70237 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356806   70237 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356812   70237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-407489"
	W0127 11:48:39.356815   70237 addons.go:247] addon metrics-server should already be in state true
	W0127 11:48:39.356814   70237 addons.go:247] addon dashboard should already be in state true
	W0127 11:48:39.356785   70237 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356919   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356780   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.357367   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357421   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357452   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357461   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357470   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357481   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357489   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357427   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.358335   70237 out.go:177] * Verifying Kubernetes components...
	I0127 11:48:39.359875   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:48:39.375814   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0127 11:48:39.376161   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0127 11:48:39.376320   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376584   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376816   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376834   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.376964   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376976   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.377329   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.377542   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.377878   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.378406   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.378448   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.378664   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0127 11:48:39.378707   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0127 11:48:39.379469   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.379520   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.380020   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.380031   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.380391   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.380901   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.380937   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.381376   70237 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-407489"
	W0127 11:48:39.381392   70237 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:48:39.381420   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.381774   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.381828   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.382425   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.382444   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.382932   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.383472   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.383515   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.399683   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0127 11:48:39.400302   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.400882   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.400901   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.401296   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.401495   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0127 11:48:39.401654   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0127 11:48:39.401894   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.401947   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402556   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402892   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402909   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.402980   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402997   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.403362   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0127 11:48:39.403805   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.403823   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.404268   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.404296   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.404472   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.404848   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.404929   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.405710   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.405726   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.406261   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.406477   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.406675   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.407171   70237 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:48:39.408344   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.408427   70237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:48:39.409688   70237 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:48:39.409753   70237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:48:39.409927   70237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.409949   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:48:39.409969   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410883   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:48:39.410891   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:48:39.410900   70237 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:48:39.410901   70237 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.414712   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415032   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415363   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415380   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415508   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415557   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.415793   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415795   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.415811   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415965   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416188   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.416193   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416207   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.416226   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.416326   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416464   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416647   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416856   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.417093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.417232   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.425335   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0127 11:48:39.425726   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.426147   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.426164   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.426496   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.426691   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.428519   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.428734   70237 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.428750   70237 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:48:39.428767   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.431736   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.431955   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.431979   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.432148   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.432352   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.432522   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.432669   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.622216   70237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:48:39.650134   70237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677286   70237 node_ready.go:49] node "default-k8s-diff-port-407489" has status "Ready":"True"
	I0127 11:48:39.677309   70237 node_ready.go:38] duration metric: took 27.135622ms for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677318   70237 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:39.687667   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
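The "extra waiting" phase walks each label selector listed above and requires every matching pod to be Ready. An equivalent spot check from outside, assuming the kubectl context targets this profile:

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	    component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  echo "== $sel =="
	  kubectl -n kube-system get pods -l "$sel"
	done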
	I0127 11:48:39.731665   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.746831   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.793916   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:48:39.793939   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:48:39.875140   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:48:39.875167   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:48:39.930947   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:48:39.930970   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:48:39.943793   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:48:39.943816   70237 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:48:39.993962   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:48:39.993993   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:48:40.041925   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:48:40.041962   70237 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:48:40.045715   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:48:40.045733   70237 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:48:40.168240   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:48:40.168261   70237 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:48:40.170308   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.170329   70237 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:48:40.222208   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:48:40.222229   70237 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:48:40.226028   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.312875   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:48:40.312990   70237 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:48:40.389058   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.389088   70237 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:48:40.437979   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.764016   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017148966s)
	I0127 11:48:40.764080   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764098   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032393238s)
	I0127 11:48:40.764145   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764163   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764466   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764476   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:40.764483   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764520   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764535   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764525   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764555   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764564   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764785   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764804   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764924   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.781921   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.781947   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.782236   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.782254   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294495   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.068429548s)
	I0127 11:48:41.294547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294560   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.294909   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.294914   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.294937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294945   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294952   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.295173   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.295220   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.295238   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.295255   70237 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:41.723523   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:41.929362   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.491326001s)
	I0127 11:48:41.929422   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929437   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.929779   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.929797   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.929815   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929825   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.930103   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.930125   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.930151   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.931487   70237 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-407489 addons enable metrics-server
	
	I0127 11:48:41.933107   70237 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:48:41.934427   70237 addons.go:514] duration metric: took 2.577793658s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:48:44.193593   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:46.196598   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:48.696840   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:49.199550   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.199588   70237 pod_ready.go:82] duration metric: took 9.511896787s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.199600   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205893   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.205926   70237 pod_ready.go:82] duration metric: took 6.298932ms for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205940   70237 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239052   70237 pod_ready.go:93] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.239081   70237 pod_ready.go:82] duration metric: took 33.131129ms for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239094   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265456   70237 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.265491   70237 pod_ready.go:82] duration metric: took 26.386948ms for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265505   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272301   70237 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.272330   70237 pod_ready.go:82] duration metric: took 6.816295ms for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272342   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591592   70237 pod_ready.go:93] pod "kube-proxy-26pw8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.591640   70237 pod_ready.go:82] duration metric: took 319.289955ms for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591655   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991689   70237 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.991721   70237 pod_ready.go:82] duration metric: took 400.056967ms for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991733   70237 pod_ready.go:39] duration metric: took 10.314402994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:49.991751   70237 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:48:49.991813   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:50.013067   70237 api_server.go:72] duration metric: took 10.656516392s to wait for apiserver process to appear ...
	I0127 11:48:50.013088   70237 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:48:50.013114   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:48:50.018115   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 200:
	ok
	I0127 11:48:50.019049   70237 api_server.go:141] control plane version: v1.32.1
	I0127 11:48:50.019078   70237 api_server.go:131] duration metric: took 5.982015ms to wait for apiserver health ...
	I0127 11:48:50.019088   70237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:48:50.196032   70237 system_pods.go:59] 9 kube-system pods found
	I0127 11:48:50.196064   70237 system_pods.go:61] "coredns-668d6bf9bc-pd5ml" [c33b4c24-e93a-4370-a289-6dca24315394] Running
	I0127 11:48:50.196070   70237 system_pods.go:61] "coredns-668d6bf9bc-sdf87" [30fc6237-1829-4315-b9cf-3354bd7a96a5] Running
	I0127 11:48:50.196075   70237 system_pods.go:61] "etcd-default-k8s-diff-port-407489" [d228476b-110d-4de7-9afe-08c2371bbb0e] Running
	I0127 11:48:50.196079   70237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-407489" [a059a0c6-34f1-46c3-9b67-adef842174f9] Running
	I0127 11:48:50.196083   70237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-407489" [aa65ad17-6f66-42c1-ad23-199b374d2104] Running
	I0127 11:48:50.196087   70237 system_pods.go:61] "kube-proxy-26pw8" [c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510] Running
	I0127 11:48:50.196090   70237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-407489" [190cc5cb-ab22-4143-a84a-3c4d975728c3] Running
	I0127 11:48:50.196098   70237 system_pods.go:61] "metrics-server-f79f97bbb-d7r6d" [6bd8680e-8338-48a2-b29b-a913d195bc9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:48:50.196102   70237 system_pods.go:61] "storage-provisioner" [58b014bb-8629-4398-a2ec-6ec95fa59111] Running
	I0127 11:48:50.196111   70237 system_pods.go:74] duration metric: took 177.016669ms to wait for pod list to return data ...
	I0127 11:48:50.196118   70237 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:48:50.392617   70237 default_sa.go:45] found service account: "default"
	I0127 11:48:50.392652   70237 default_sa.go:55] duration metric: took 196.52383ms for default service account to be created ...
	I0127 11:48:50.392664   70237 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:48:50.594360   70237 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
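The pod_ready entries in the stderr log above poll each system pod until its Ready condition reports True. The same gate can be checked by hand with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name (as minikube sets it by default):

	# wait for the CoreDNS pods polled above to report Ready, with the same 6m budget
	kubectl --context default-k8s-diff-port-407489 -n kube-system \
		wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s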
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
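"signal: killed" here most likely means the harness hit its overall deadline and killed the subprocess, not that the start itself errored. For local triage, the killed invocation can be rerun as-is; this is just the command from the failure message above, reflowed for readability (allow enough wall-clock budget on a rerun):

	out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 \
		--memory=2200 --alsologtostderr --wait=true \
		--apiserver-port=8444 --driver=kvm2 \
		--container-runtime=crio --kubernetes-version=v1.32.1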
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407489 -n default-k8s-diff-port-407489
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-407489 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-407489 logs -n 25: (1.459486753s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-673007 sudo journalctl                       | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo docker                           | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo                                  | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo cat                              | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo containerd                       | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo systemctl                        | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo find                             | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-673007 sudo crio                             | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-673007                                       | auto-673007           | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	| start   | -p calico-673007 --memory=3072                       | calico-673007         | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| delete  | -p embed-certs-986409                                | embed-certs-986409    | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	| start   | -p custom-flannel-673007                             | custom-flannel-673007 | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:09:39
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:09:39.260906   79591 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:09:39.261165   79591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:39.261175   79591 out.go:358] Setting ErrFile to fd 2...
	I0127 12:09:39.261180   79591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:39.261414   79591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 12:09:39.262007   79591 out.go:352] Setting JSON to false
	I0127 12:09:39.263021   79591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10279,"bootTime":1737969500,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:09:39.263114   79591 start.go:139] virtualization: kvm guest
	I0127 12:09:39.265106   79591 out.go:177] * [custom-flannel-673007] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:09:39.267007   79591 notify.go:220] Checking for updates...
	I0127 12:09:39.267038   79591 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 12:09:39.268546   79591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:09:39.270057   79591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 12:09:39.271658   79591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 12:09:39.273195   79591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:09:39.274778   79591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:09:39.276793   79591 config.go:182] Loaded profile config "calico-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:39.276961   79591 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:39.277076   79591 config.go:182] Loaded profile config "kindnet-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:39.277191   79591 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:09:39.314168   79591 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:09:39.315798   79591 start.go:297] selected driver: kvm2
	I0127 12:09:39.315818   79591 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:09:39.315840   79591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:09:39.316567   79591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:39.316633   79591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:09:39.332739   79591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:09:39.332795   79591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:09:39.333082   79591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:09:39.333119   79591 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0127 12:09:39.333142   79591 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0127 12:09:39.333213   79591 start.go:340] cluster config:
	{Name:custom-flannel-673007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-673007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:39.333346   79591 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:39.335280   79591 out.go:177] * Starting "custom-flannel-673007" primary control-plane node in "custom-flannel-673007" cluster
	I0127 12:09:36.076119   79377 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:09:36.076169   79377 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:09:36.076180   79377 cache.go:56] Caching tarball of preloaded images
	I0127 12:09:36.076274   79377 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:09:36.076290   79377 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:09:36.076403   79377 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/calico-673007/config.json ...
	I0127 12:09:36.076434   79377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/calico-673007/config.json: {Name:mk62a0d8c8c973ad3beb367fc4d820463e6ec9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:36.076609   79377 start.go:360] acquireMachinesLock for calico-673007: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:09:38.302682   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:38.303232   77963 main.go:141] libmachine: (kindnet-673007) DBG | unable to find current IP address of domain kindnet-673007 in network mk-kindnet-673007
	I0127 12:09:38.303398   77963 main.go:141] libmachine: (kindnet-673007) DBG | I0127 12:09:38.303224   77987 retry.go:31] will retry after 3.463393462s: waiting for domain to come up
	I0127 12:09:41.769250   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:41.769958   77963 main.go:141] libmachine: (kindnet-673007) DBG | unable to find current IP address of domain kindnet-673007 in network mk-kindnet-673007
	I0127 12:09:41.769980   77963 main.go:141] libmachine: (kindnet-673007) DBG | I0127 12:09:41.769924   77987 retry.go:31] will retry after 4.275179565s: waiting for domain to come up
	I0127 12:09:39.336570   79591 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:09:39.336611   79591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:09:39.336618   79591 cache.go:56] Caching tarball of preloaded images
	I0127 12:09:39.336698   79591 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:09:39.336709   79591 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:09:39.336789   79591 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/custom-flannel-673007/config.json ...
	I0127 12:09:39.336806   79591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/custom-flannel-673007/config.json: {Name:mk50952fd1aac38f68921e49c1b62b26059e966c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:39.336925   79591 start.go:360] acquireMachinesLock for custom-flannel-673007: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:09:47.656132   79377 start.go:364] duration metric: took 11.579456108s to acquireMachinesLock for "calico-673007"
	I0127 12:09:47.656201   79377 start.go:93] Provisioning new machine with config: &{Name:calico-673007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-673007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:09:47.656376   79377 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:09:46.046272   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.046834   77963 main.go:141] libmachine: (kindnet-673007) found domain IP: 192.168.50.91
	I0127 12:09:46.046867   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has current primary IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.046875   77963 main.go:141] libmachine: (kindnet-673007) reserving static IP address...
	I0127 12:09:46.047280   77963 main.go:141] libmachine: (kindnet-673007) DBG | unable to find host DHCP lease matching {name: "kindnet-673007", mac: "52:54:00:39:d9:0b", ip: "192.168.50.91"} in network mk-kindnet-673007
	I0127 12:09:46.121005   77963 main.go:141] libmachine: (kindnet-673007) DBG | Getting to WaitForSSH function...
	I0127 12:09:46.121035   77963 main.go:141] libmachine: (kindnet-673007) reserved static IP address 192.168.50.91 for domain kindnet-673007
	I0127 12:09:46.121055   77963 main.go:141] libmachine: (kindnet-673007) waiting for SSH...
	I0127 12:09:46.123797   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.124122   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.124167   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.124205   77963 main.go:141] libmachine: (kindnet-673007) DBG | Using SSH client type: external
	I0127 12:09:46.124222   77963 main.go:141] libmachine: (kindnet-673007) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa (-rw-------)
	I0127 12:09:46.124253   77963 main.go:141] libmachine: (kindnet-673007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:09:46.124277   77963 main.go:141] libmachine: (kindnet-673007) DBG | About to run SSH command:
	I0127 12:09:46.124294   77963 main.go:141] libmachine: (kindnet-673007) DBG | exit 0
	I0127 12:09:46.251442   77963 main.go:141] libmachine: (kindnet-673007) DBG | SSH cmd err, output: <nil>: 
	I0127 12:09:46.251729   77963 main.go:141] libmachine: (kindnet-673007) KVM machine creation complete
	I0127 12:09:46.252041   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetConfigRaw
	I0127 12:09:46.252573   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:46.252745   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:46.252862   77963 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:09:46.252878   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetState
	I0127 12:09:46.254127   77963 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:09:46.254145   77963 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:09:46.254152   77963 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:09:46.254160   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:46.256420   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.256880   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.256907   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.257054   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:46.257242   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.257396   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.257538   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:46.257659   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:46.257830   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:46.257840   77963 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:09:46.370660   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:09:46.370689   77963 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:09:46.370699   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:46.373883   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.374287   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.374318   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.374483   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:46.374687   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.374831   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.374982   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:46.375130   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:46.375303   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:46.375320   77963 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:09:46.488300   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:09:46.488405   77963 main.go:141] libmachine: found compatible host: buildroot
	I0127 12:09:46.488433   77963 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:09:46.488449   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetMachineName
	I0127 12:09:46.488744   77963 buildroot.go:166] provisioning hostname "kindnet-673007"
	I0127 12:09:46.488777   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetMachineName
	I0127 12:09:46.488977   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:46.491689   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.492073   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.492100   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.492200   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:46.492374   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.492536   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.492659   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:46.492803   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:46.492974   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:46.492987   77963 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-673007 && echo "kindnet-673007" | sudo tee /etc/hostname
	I0127 12:09:46.616503   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-673007
	
	I0127 12:09:46.616529   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:46.619490   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.619811   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.619839   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.619979   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:46.620162   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.620304   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:46.620467   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:46.620658   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:46.620856   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:46.620881   77963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-673007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-673007/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-673007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:09:46.740001   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:09:46.740030   77963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 12:09:46.740085   77963 buildroot.go:174] setting up certificates
	I0127 12:09:46.740100   77963 provision.go:84] configureAuth start
	I0127 12:09:46.740119   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetMachineName
	I0127 12:09:46.740406   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetIP
	I0127 12:09:46.743322   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.743674   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.743713   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.743861   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:46.746184   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.746507   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:46.746531   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:46.746678   77963 provision.go:143] copyHostCerts
	I0127 12:09:46.746737   77963 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 12:09:46.746758   77963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 12:09:46.746835   77963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 12:09:46.746974   77963 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 12:09:46.746995   77963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 12:09:46.747056   77963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 12:09:46.747145   77963 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 12:09:46.747153   77963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 12:09:46.747188   77963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 12:09:46.747271   77963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.kindnet-673007 san=[127.0.0.1 192.168.50.91 kindnet-673007 localhost minikube]
	I0127 12:09:47.012200   77963 provision.go:177] copyRemoteCerts
	I0127 12:09:47.012253   77963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:09:47.012281   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.014836   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.015115   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.015145   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.015302   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.015534   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.015698   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.015844   77963 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa Username:docker}
	I0127 12:09:47.101189   77963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:09:47.123970   77963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0127 12:09:47.146749   77963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:09:47.169685   77963 provision.go:87] duration metric: took 429.566573ms to configureAuth
	I0127 12:09:47.169719   77963 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:09:47.169873   77963 config.go:182] Loaded profile config "kindnet-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:09:47.169990   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.172762   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.173088   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.173118   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.173228   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.173411   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.173588   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.173715   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.173834   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:47.173978   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:47.173991   77963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:09:47.404260   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:09:47.404291   77963 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:09:47.404298   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetURL
	I0127 12:09:47.405507   77963 main.go:141] libmachine: (kindnet-673007) DBG | using libvirt version 6000000
	I0127 12:09:47.407550   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.407894   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.407926   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.408097   77963 main.go:141] libmachine: Docker is up and running!
	I0127 12:09:47.408114   77963 main.go:141] libmachine: Reticulating splines...
	I0127 12:09:47.408122   77963 client.go:171] duration metric: took 24.131273848s to LocalClient.Create
	I0127 12:09:47.408144   77963 start.go:167] duration metric: took 24.131327608s to libmachine.API.Create "kindnet-673007"
	I0127 12:09:47.408155   77963 start.go:293] postStartSetup for "kindnet-673007" (driver="kvm2")
	I0127 12:09:47.408164   77963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:09:47.408178   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:47.408427   77963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:09:47.408447   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.410771   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.411115   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.411143   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.411336   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.411510   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.411675   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.411852   77963 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa Username:docker}
	I0127 12:09:47.499550   77963 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:09:47.503872   77963 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:09:47.503889   77963 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 12:09:47.503972   77963 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 12:09:47.504066   77963 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 12:09:47.504174   77963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:09:47.515528   77963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 12:09:47.537653   77963 start.go:296] duration metric: took 129.485146ms for postStartSetup
	I0127 12:09:47.537702   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetConfigRaw
	I0127 12:09:47.538305   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetIP
	I0127 12:09:47.541097   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.541548   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.541576   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.541756   77963 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/kindnet-673007/config.json ...
	I0127 12:09:47.541951   77963 start.go:128] duration metric: took 24.283069613s to createHost
	I0127 12:09:47.541979   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.544354   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.544729   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.544756   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.544854   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.545030   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.545189   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.545361   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.545525   77963 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:47.545673   77963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.91 22 <nil> <nil>}
	I0127 12:09:47.545683   77963 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:09:47.655934   77963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737979787.630883459
	
	I0127 12:09:47.655959   77963 fix.go:216] guest clock: 1737979787.630883459
	I0127 12:09:47.655966   77963 fix.go:229] Guest: 2025-01-27 12:09:47.630883459 +0000 UTC Remote: 2025-01-27 12:09:47.541963276 +0000 UTC m=+24.396019138 (delta=88.920183ms)
	I0127 12:09:47.656005   77963 fix.go:200] guest clock delta is within tolerance: 88.920183ms
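The guest-clock fix runs `date +%s.%N` over SSH and compares the parsed guest time against the host time captured around the call; the 88.920183ms delta here sits inside tolerance. A self-contained sketch of that comparison using the exact values from the log (the 2s tolerance constant is an assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the host clock is from the guest clock captured at roughly the same time.
	// float64 is coarser than nanoseconds at this epoch, which is fine for an
	// ms-scale check like the one in the log.
	func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return hostNow.Sub(guest), nil
	}

	func main() {
		// Values taken from the log: guest 1737979787.630883459 vs
		// host 2025-01-27 12:09:47.541963276 UTC.
		hostNow := time.Date(2025, 1, 27, 12, 9, 47, 541963276, time.UTC)
		d, _ := clockDelta("1737979787.630883459", hostNow)
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v, within tolerance: %v\n", d, math.Abs(float64(d)) <= float64(tolerance))
	}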
	I0127 12:09:47.656010   77963 start.go:83] releasing machines lock for "kindnet-673007", held for 24.397229837s
	I0127 12:09:47.656034   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:47.656287   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetIP
	I0127 12:09:47.659717   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.660101   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.660131   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.660264   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:47.660720   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:47.660854   77963 main.go:141] libmachine: (kindnet-673007) Calling .DriverName
	I0127 12:09:47.660945   77963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:09:47.660986   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.661040   77963 ssh_runner.go:195] Run: cat /version.json
	I0127 12:09:47.661063   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHHostname
	I0127 12:09:47.663651   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.663925   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.664014   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.664051   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.664184   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.664286   77963 main.go:141] libmachine: (kindnet-673007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:d9:0b", ip: ""} in network mk-kindnet-673007: {Iface:virbr3 ExpiryTime:2025-01-27 13:09:38 +0000 UTC Type:0 Mac:52:54:00:39:d9:0b Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:kindnet-673007 Clientid:01:52:54:00:39:d9:0b}
	I0127 12:09:47.664319   77963 main.go:141] libmachine: (kindnet-673007) DBG | domain kindnet-673007 has defined IP address 192.168.50.91 and MAC address 52:54:00:39:d9:0b in network mk-kindnet-673007
	I0127 12:09:47.664347   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.664456   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHPort
	I0127 12:09:47.664526   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.664600   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHKeyPath
	I0127 12:09:47.664611   77963 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa Username:docker}
	I0127 12:09:47.664733   77963 main.go:141] libmachine: (kindnet-673007) Calling .GetSSHUsername
	I0127 12:09:47.664889   77963 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/kindnet-673007/id_rsa Username:docker}
	I0127 12:09:47.769393   77963 ssh_runner.go:195] Run: systemctl --version
	I0127 12:09:47.776137   77963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:09:47.929464   77963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:09:47.935452   77963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:09:47.935529   77963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:09:47.951065   77963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
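The find/-exec pipeline renames any bridge or podman CNI config to *.mk_disabled so only the chosen CNI remains active. The same effect sketched in Go against a local /etc/cni/net.d (illustrative only, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfs renames bridge/podman CNI configs out of the way by
	// appending .mk_disabled, matching the find/mv pipeline in the log.
	func disableCNIConfs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableCNIConfs("/etc/cni/net.d")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("disabled", disabled, "bridge cni config(s)")
	}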
	I0127 12:09:47.951086   77963 start.go:495] detecting cgroup driver to use...
	I0127 12:09:47.951158   77963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:09:47.966820   77963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:09:47.980181   77963 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:09:47.980264   77963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:09:47.993022   77963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:09:48.006387   77963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:09:48.118747   77963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:09:48.263033   77963 docker.go:233] disabling docker service ...
	I0127 12:09:48.263099   77963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:09:48.277759   77963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:09:48.290806   77963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:09:48.442648   77963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:09:48.567176   77963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:09:48.580976   77963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:09:48.601649   77963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:09:48.601712   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.611642   77963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:09:48.611700   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.621433   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.631111   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.641278   77963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:09:48.654084   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.666438   77963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:09:48.685142   77963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
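The sed pipeline above pins the pause image, switches cgroup_manager to cgroupfs, and re-creates conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. As an illustration only (not minikube's code), here are those line-oriented rewrites applied to an in-memory copy of the file:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "k8s.gcr.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Mirror the sed edits from the log: pin the pause image, switch the
		// cgroup manager to cgroupfs, then delete and re-add conmon_cgroup.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}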
	I0127 12:09:48.695211   77963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:09:48.704158   77963 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:09:48.704214   77963 ssh_runner.go:195] Run: sudo modprobe br_netfilter
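The sysctl probe is allowed to fail ("which might be okay") because the bridge-netfilter entry only appears once br_netfilter is loaded; the fallback is to modprobe it. A hedged sketch of that check-then-load pattern, with exec.Command standing in for minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter checks whether the bridge-netfilter sysctl is
	// visible and, if not, tries to load the br_netfilter module,
	// mirroring the fallback in the log. Assumes passwordless sudo.
	func ensureBrNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // sysctl is present; nothing to do
		}
		// The sysctl can be absent simply because the module is not loaded
		// yet, so the failure above is only fatal if modprobe also fails.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
		return nil
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}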
	I0127 12:09:48.717013   77963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:09:48.736934   77963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:09:48.848200   77963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:09:48.944120   77963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:09:48.944188   77963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:09:48.948875   77963 start.go:563] Will wait 60s for crictl version
	I0127 12:09:48.948926   77963 ssh_runner.go:195] Run: which crictl
	I0127 12:09:48.952571   77963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:09:48.987536   77963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
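Before querying crictl, start.go waits up to 60s for /var/run/crio/crio.sock to exist. A minimal polling loop for that kind of wait (the 500ms interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes, roughly
	// what "Will wait 60s for socket path /var/run/crio/crio.sock" describes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // poll interval is an assumption
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}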
	I0127 12:09:48.987627   77963 ssh_runner.go:195] Run: crio --version
	I0127 12:09:49.024038   77963 ssh_runner.go:195] Run: crio --version
	I0127 12:09:49.051178   77963 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:09:47.658463   79377 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 12:09:47.658657   79377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:09:47.658686   79377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:09:47.674795   79377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0127 12:09:47.675148   79377 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:09:47.675756   79377 main.go:141] libmachine: Using API Version  1
	I0127 12:09:47.675779   79377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:09:47.676073   79377 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:09:47.676254   79377 main.go:141] libmachine: (calico-673007) Calling .GetMachineName
	I0127 12:09:47.676401   79377 main.go:141] libmachine: (calico-673007) Calling .DriverName
	I0127 12:09:47.676521   79377 start.go:159] libmachine.API.Create for "calico-673007" (driver="kvm2")
	I0127 12:09:47.676554   79377 client.go:168] LocalClient.Create starting
	I0127 12:09:47.676584   79377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem
	I0127 12:09:47.676616   79377 main.go:141] libmachine: Decoding PEM data...
	I0127 12:09:47.676629   79377 main.go:141] libmachine: Parsing certificate...
	I0127 12:09:47.676677   79377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem
	I0127 12:09:47.676695   79377 main.go:141] libmachine: Decoding PEM data...
	I0127 12:09:47.676706   79377 main.go:141] libmachine: Parsing certificate...
	I0127 12:09:47.676720   79377 main.go:141] libmachine: Running pre-create checks...
	I0127 12:09:47.676729   79377 main.go:141] libmachine: (calico-673007) Calling .PreCreateCheck
	I0127 12:09:47.677050   79377 main.go:141] libmachine: (calico-673007) Calling .GetConfigRaw
	I0127 12:09:47.677455   79377 main.go:141] libmachine: Creating machine...
	I0127 12:09:47.677467   79377 main.go:141] libmachine: (calico-673007) Calling .Create
	I0127 12:09:47.677599   79377 main.go:141] libmachine: (calico-673007) creating KVM machine...
	I0127 12:09:47.677614   79377 main.go:141] libmachine: (calico-673007) creating network...
	I0127 12:09:47.678848   79377 main.go:141] libmachine: (calico-673007) DBG | found existing default KVM network
	I0127 12:09:47.679787   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:47.679664   79660 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bc:3d:a5} reservation:<nil>}
	I0127 12:09:47.680486   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:47.680417   79660 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:c3:b2} reservation:<nil>}
	I0127 12:09:47.681305   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:47.681210   79660 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025ce90}
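network.go scans candidate private /24s, skipping any already held by another libvirt network, and settles on the first free one, 192.168.61.0/24 here. The scan reduces to something like this (the candidate list and takenness check are simplified stand-ins):

	package main

	import "fmt"

	// freeSubnet returns the first candidate /24 that no existing network
	// occupies, mirroring the "skipping subnet ... that is taken" lines above.
	func freeSubnet(candidates []string, taken map[string]bool) (string, bool) {
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			return cidr, true
		}
		return "", false
	}

	func main() {
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
		taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
		if cidr, ok := freeSubnet(candidates, taken); ok {
			fmt.Println("using free private subnet", cidr)
		}
	}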
	I0127 12:09:47.681337   79377 main.go:141] libmachine: (calico-673007) DBG | created network xml: 
	I0127 12:09:47.681351   79377 main.go:141] libmachine: (calico-673007) DBG | <network>
	I0127 12:09:47.681358   79377 main.go:141] libmachine: (calico-673007) DBG |   <name>mk-calico-673007</name>
	I0127 12:09:47.681367   79377 main.go:141] libmachine: (calico-673007) DBG |   <dns enable='no'/>
	I0127 12:09:47.681374   79377 main.go:141] libmachine: (calico-673007) DBG |   
	I0127 12:09:47.681383   79377 main.go:141] libmachine: (calico-673007) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 12:09:47.681402   79377 main.go:141] libmachine: (calico-673007) DBG |     <dhcp>
	I0127 12:09:47.681415   79377 main.go:141] libmachine: (calico-673007) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 12:09:47.681424   79377 main.go:141] libmachine: (calico-673007) DBG |     </dhcp>
	I0127 12:09:47.681432   79377 main.go:141] libmachine: (calico-673007) DBG |   </ip>
	I0127 12:09:47.681441   79377 main.go:141] libmachine: (calico-673007) DBG |   
	I0127 12:09:47.681454   79377 main.go:141] libmachine: (calico-673007) DBG | </network>
	I0127 12:09:47.681463   79377 main.go:141] libmachine: (calico-673007) DBG | 
	I0127 12:09:47.686529   79377 main.go:141] libmachine: (calico-673007) DBG | trying to create private KVM network mk-calico-673007 192.168.61.0/24...
	I0127 12:09:47.762289   79377 main.go:141] libmachine: (calico-673007) setting up store path in /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007 ...
	I0127 12:09:47.762320   79377 main.go:141] libmachine: (calico-673007) DBG | private KVM network mk-calico-673007 192.168.61.0/24 created
	I0127 12:09:47.762332   79377 main.go:141] libmachine: (calico-673007) building disk image from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:09:47.762357   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:47.762210   79660 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 12:09:47.762412   79377 main.go:141] libmachine: (calico-673007) Downloading /home/jenkins/minikube-integration/20319-18835/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:09:48.017713   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:48.017621   79660 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007/id_rsa...
	I0127 12:09:48.098480   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:48.098341   79660 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007/calico-673007.rawdisk...
	I0127 12:09:48.098526   79377 main.go:141] libmachine: (calico-673007) DBG | Writing magic tar header
	I0127 12:09:48.098539   79377 main.go:141] libmachine: (calico-673007) DBG | Writing SSH key tar header
	I0127 12:09:48.098551   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:48.098519   79660 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007 ...
	I0127 12:09:48.098671   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007
	I0127 12:09:48.098689   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube/machines
	I0127 12:09:48.098697   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007 (perms=drwx------)
	I0127 12:09:48.098707   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:09:48.098713   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins/minikube-integration/20319-18835/.minikube (perms=drwxr-xr-x)
	I0127 12:09:48.098722   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins/minikube-integration/20319-18835 (perms=drwxrwxr-x)
	I0127 12:09:48.098728   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:09:48.098735   79377 main.go:141] libmachine: (calico-673007) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:09:48.098740   79377 main.go:141] libmachine: (calico-673007) creating domain...
	I0127 12:09:48.098750   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 12:09:48.098758   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-18835
	I0127 12:09:48.098764   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:09:48.098769   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home/jenkins
	I0127 12:09:48.098791   79377 main.go:141] libmachine: (calico-673007) DBG | checking permissions on dir: /home
	I0127 12:09:48.098816   79377 main.go:141] libmachine: (calico-673007) DBG | skipping /home - not owner
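common.go walks upward from the machine directory, adding the executable (search) bit wherever the current user owns the directory and stopping at the first directory it does not own ("skipping /home - not owner"). A rough sketch of that upward walk, with the ownership check reduced to a callback:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// fixPermsUpward sets the executable bit on dir and each parent,
	// stopping where the caller no longer owns the directory, similar to
	// the "setting executable bit" / "skipping ... - not owner" lines above.
	func fixPermsUpward(dir string, owned func(string) bool) error {
		for ; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
			if !owned(dir) {
				fmt.Println("skipping", dir, "- not owner")
				return nil
			}
			info, err := os.Stat(dir)
			if err != nil {
				return err
			}
			if err := os.Chmod(dir, info.Mode()|0o111); err != nil {
				return err
			}
			fmt.Println("set executable bit on", dir)
		}
		return nil
	}

	func main() {
		// Demo tree; the ownership test is a stand-in for the real UID check.
		_ = os.MkdirAll("/tmp/minikube-demo/machines/demo", 0o700)
		_ = fixPermsUpward("/tmp/minikube-demo/machines/demo", func(p string) bool {
			return p != "/tmp" // pretend we own everything below /tmp
		})
	}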
	I0127 12:09:48.100034   79377 main.go:141] libmachine: (calico-673007) define libvirt domain using xml: 
	I0127 12:09:48.100060   79377 main.go:141] libmachine: (calico-673007) <domain type='kvm'>
	I0127 12:09:48.100087   79377 main.go:141] libmachine: (calico-673007)   <name>calico-673007</name>
	I0127 12:09:48.100110   79377 main.go:141] libmachine: (calico-673007)   <memory unit='MiB'>3072</memory>
	I0127 12:09:48.100120   79377 main.go:141] libmachine: (calico-673007)   <vcpu>2</vcpu>
	I0127 12:09:48.100127   79377 main.go:141] libmachine: (calico-673007)   <features>
	I0127 12:09:48.100137   79377 main.go:141] libmachine: (calico-673007)     <acpi/>
	I0127 12:09:48.100146   79377 main.go:141] libmachine: (calico-673007)     <apic/>
	I0127 12:09:48.100154   79377 main.go:141] libmachine: (calico-673007)     <pae/>
	I0127 12:09:48.100163   79377 main.go:141] libmachine: (calico-673007)     
	I0127 12:09:48.100172   79377 main.go:141] libmachine: (calico-673007)   </features>
	I0127 12:09:48.100187   79377 main.go:141] libmachine: (calico-673007)   <cpu mode='host-passthrough'>
	I0127 12:09:48.100198   79377 main.go:141] libmachine: (calico-673007)   
	I0127 12:09:48.100208   79377 main.go:141] libmachine: (calico-673007)   </cpu>
	I0127 12:09:48.100216   79377 main.go:141] libmachine: (calico-673007)   <os>
	I0127 12:09:48.100225   79377 main.go:141] libmachine: (calico-673007)     <type>hvm</type>
	I0127 12:09:48.100239   79377 main.go:141] libmachine: (calico-673007)     <boot dev='cdrom'/>
	I0127 12:09:48.100250   79377 main.go:141] libmachine: (calico-673007)     <boot dev='hd'/>
	I0127 12:09:48.100267   79377 main.go:141] libmachine: (calico-673007)     <bootmenu enable='no'/>
	I0127 12:09:48.100283   79377 main.go:141] libmachine: (calico-673007)   </os>
	I0127 12:09:48.100302   79377 main.go:141] libmachine: (calico-673007)   <devices>
	I0127 12:09:48.100325   79377 main.go:141] libmachine: (calico-673007)     <disk type='file' device='cdrom'>
	I0127 12:09:48.100348   79377 main.go:141] libmachine: (calico-673007)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007/boot2docker.iso'/>
	I0127 12:09:48.100360   79377 main.go:141] libmachine: (calico-673007)       <target dev='hdc' bus='scsi'/>
	I0127 12:09:48.100370   79377 main.go:141] libmachine: (calico-673007)       <readonly/>
	I0127 12:09:48.100381   79377 main.go:141] libmachine: (calico-673007)     </disk>
	I0127 12:09:48.100399   79377 main.go:141] libmachine: (calico-673007)     <disk type='file' device='disk'>
	I0127 12:09:48.100417   79377 main.go:141] libmachine: (calico-673007)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:09:48.100435   79377 main.go:141] libmachine: (calico-673007)       <source file='/home/jenkins/minikube-integration/20319-18835/.minikube/machines/calico-673007/calico-673007.rawdisk'/>
	I0127 12:09:48.100446   79377 main.go:141] libmachine: (calico-673007)       <target dev='hda' bus='virtio'/>
	I0127 12:09:48.100457   79377 main.go:141] libmachine: (calico-673007)     </disk>
	I0127 12:09:48.100467   79377 main.go:141] libmachine: (calico-673007)     <interface type='network'>
	I0127 12:09:48.100479   79377 main.go:141] libmachine: (calico-673007)       <source network='mk-calico-673007'/>
	I0127 12:09:48.100493   79377 main.go:141] libmachine: (calico-673007)       <model type='virtio'/>
	I0127 12:09:48.100505   79377 main.go:141] libmachine: (calico-673007)     </interface>
	I0127 12:09:48.100516   79377 main.go:141] libmachine: (calico-673007)     <interface type='network'>
	I0127 12:09:48.100527   79377 main.go:141] libmachine: (calico-673007)       <source network='default'/>
	I0127 12:09:48.100537   79377 main.go:141] libmachine: (calico-673007)       <model type='virtio'/>
	I0127 12:09:48.100548   79377 main.go:141] libmachine: (calico-673007)     </interface>
	I0127 12:09:48.100570   79377 main.go:141] libmachine: (calico-673007)     <serial type='pty'>
	I0127 12:09:48.100582   79377 main.go:141] libmachine: (calico-673007)       <target port='0'/>
	I0127 12:09:48.100589   79377 main.go:141] libmachine: (calico-673007)     </serial>
	I0127 12:09:48.100600   79377 main.go:141] libmachine: (calico-673007)     <console type='pty'>
	I0127 12:09:48.100611   79377 main.go:141] libmachine: (calico-673007)       <target type='serial' port='0'/>
	I0127 12:09:48.100623   79377 main.go:141] libmachine: (calico-673007)     </console>
	I0127 12:09:48.100633   79377 main.go:141] libmachine: (calico-673007)     <rng model='virtio'>
	I0127 12:09:48.100645   79377 main.go:141] libmachine: (calico-673007)       <backend model='random'>/dev/random</backend>
	I0127 12:09:48.100654   79377 main.go:141] libmachine: (calico-673007)     </rng>
	I0127 12:09:48.100663   79377 main.go:141] libmachine: (calico-673007)     
	I0127 12:09:48.100672   79377 main.go:141] libmachine: (calico-673007)     
	I0127 12:09:48.100681   79377 main.go:141] libmachine: (calico-673007)   </devices>
	I0127 12:09:48.100689   79377 main.go:141] libmachine: (calico-673007) </domain>
	I0127 12:09:48.100698   79377 main.go:141] libmachine: (calico-673007) 
	I0127 12:09:48.105046   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:84:0d:f3 in network default
	I0127 12:09:48.105795   79377 main.go:141] libmachine: (calico-673007) starting domain...
	I0127 12:09:48.105818   79377 main.go:141] libmachine: (calico-673007) ensuring networks are active...
	I0127 12:09:48.105830   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:48.106489   79377 main.go:141] libmachine: (calico-673007) Ensuring network default is active
	I0127 12:09:48.106812   79377 main.go:141] libmachine: (calico-673007) Ensuring network mk-calico-673007 is active
	I0127 12:09:48.107290   79377 main.go:141] libmachine: (calico-673007) getting domain XML...
	I0127 12:09:48.108033   79377 main.go:141] libmachine: (calico-673007) creating domain...
	I0127 12:09:49.446375   79377 main.go:141] libmachine: (calico-673007) waiting for IP...
	I0127 12:09:49.447259   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:49.447757   79377 main.go:141] libmachine: (calico-673007) DBG | unable to find current IP address of domain calico-673007 in network mk-calico-673007
	I0127 12:09:49.447792   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:49.447744   79660 retry.go:31] will retry after 274.796425ms: waiting for domain to come up
	I0127 12:09:49.724581   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:49.725216   79377 main.go:141] libmachine: (calico-673007) DBG | unable to find current IP address of domain calico-673007 in network mk-calico-673007
	I0127 12:09:49.725246   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:49.725203   79660 retry.go:31] will retry after 363.721336ms: waiting for domain to come up
	I0127 12:09:50.090886   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:50.091587   79377 main.go:141] libmachine: (calico-673007) DBG | unable to find current IP address of domain calico-673007 in network mk-calico-673007
	I0127 12:09:50.091639   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:50.091503   79660 retry.go:31] will retry after 380.440901ms: waiting for domain to come up
	I0127 12:09:50.474065   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:50.474559   79377 main.go:141] libmachine: (calico-673007) DBG | unable to find current IP address of domain calico-673007 in network mk-calico-673007
	I0127 12:09:50.474590   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:50.474526   79660 retry.go:31] will retry after 450.558278ms: waiting for domain to come up
	I0127 12:09:50.927200   79377 main.go:141] libmachine: (calico-673007) DBG | domain calico-673007 has defined MAC address 52:54:00:5f:41:cc in network mk-calico-673007
	I0127 12:09:50.927732   79377 main.go:141] libmachine: (calico-673007) DBG | unable to find current IP address of domain calico-673007 in network mk-calico-673007
	I0127 12:09:50.927769   79377 main.go:141] libmachine: (calico-673007) DBG | I0127 12:09:50.927685   79660 retry.go:31] will retry after 710.822494ms: waiting for domain to come up
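The wait-for-IP loop re-queries the domain's DHCP leases with a growing, jittered delay (274ms, 363ms, 380ms, 450ms, 710ms above). A generic sketch of that retry shape; the base delay, increment, and the fake lookupIP are all assumptions:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases of mk-calico-673007;
	// this placeholder succeeds after a few attempts.
	var attempts int

	func lookupIP() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.61.2", nil // illustrative address inside the new subnet
	}

	func main() {
		backoff := 250 * time.Millisecond
		for i := 0; i < 10; i++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("domain is up at", ip)
				return
			}
			// Jittered, roughly increasing delay, like the retry.go lines above.
			d := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
			time.Sleep(d)
			backoff += 50 * time.Millisecond
		}
	}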
	
	
	==> CRI-O <==
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.556266537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edb7759c-fe98-4212-911e-e86c8c773f28 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.557932599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f1197ae-5f67-4a04-a9c8-a701d3b3719c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.558546978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979792558440750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f1197ae-5f67-4a04-a9c8-a701d3b3719c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.559131928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5d026a4-508e-4d94-be54-5bc2d51f0d03 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.559224690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5d026a4-508e-4d94-be54-5bc2d51f0d03 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.559602204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab,PodSandboxId:45a93305b00899013d0d71b1b8e58ac83bef45042904bfb2ed09670d8beb6801,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979773705357099,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-prqw2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e06566d3-d669-46e4-9ecd-ca9664be4767,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c769edd8580ef47b19635b7bc4e060a5d29d7037886efa19e1426111b071b16b,PodSandboxId:52c0bb9597548a5b3f7fb9e48effac9627ad298984ce920fe9496b954a8ced0a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978532484157556,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s4lsr,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e12693f1-8a4a-4545-9072-67c8b236304f,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbdc527e2487fc7034e52a1e0aab86616d72ce192bcb27124e63f5472c0ae1a,PodSandboxId:955509fbdc454bbf34ddb6149e522e2c4b03fee61cd5c36c8fffe99a414cb610,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978521355415955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58b014bb-8629-4398-a2ec-6ec95fa59111,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c3b1d3cfb3efdd77d5179e4fcac70c7070fc505b6facfcf91eb1921118fa8f,PodSandboxId:a46cbf6eb67f1232043e2fd821cfc8f6a9b72bc5b1a99880cf5565f5ec979bb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978520144725942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdf87,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fc6237-1829-4315-b9cf-3354bd7a96a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6325bd36c067eee428c7803521927f6a6196fe4a0b04c5a2c6297544346c1db,PodSandboxId:e237fb1e86caaa521e33a0fd8d54eef41bd2a7602623a8556422511b7bcb8fb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978519958192171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pd5ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33b4c24-e93a-4370-a289-6dca24315394,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed6814a49405a66651fa16b3df6628dd8b881eb0c4f70af272cbed8a134d6cf,PodSandboxId:8fa24c54407841ced6451a1a3f26003706a9ff360e132c523d0c25d39c48968f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978519150194552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26pw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b18bff33d10928f1003b694a6559d805d27dbcc3d900c7ac2762afac54b3be,PodSandboxId:367027a0e738588df917d8bd27826144ecd341d097a1e64519ca07b33cbb6416,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978509114294761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74e10132ca91cf6cbc4436341964199,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952751f0951d89d65f22626902540c921d59d3877480a4b416b9103cb8e0137b,PodSandboxId:6b04f84b431870eb1e922d87b83a99e4e9dbb567934dbde759deb4d207bd06d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978509114912422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62e070a9cf28916a3860ef6e0fb77479,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21eccd8b6b34a07c70e5ecd8b8b2562b77f25034b298eb6ae46a6dfaa3c4f7c3,PodSandboxId:6cc73d738a3fe323c53e2257f92e68e73cc6530f7422338af0cc4238266968d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978509072720623,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e513d1af57612b3edb7555da126534,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ebecccd9c6383ebbfaf2acfd7ef0e329916565f91bfc56a622dfe775308d47,PodSandboxId:364cb5a56eb28cdc960f96462f68d81ef302532128ffca37712fcc8937c87c40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978509041552773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b09f045811c5975886f1801c909f4aea78b89b8fc65510664955a862a516e4,PodSandboxId:2ad67d6956e8169af2fd4e65c232afced896468ec7e7a72a84bc28f8a06bbd3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978223790696548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5d026a4-508e-4d94-be54-5bc2d51f0d03 name=/runtime.v1.RuntimeService/ListContainers
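Each of these debug pairs is one CRI gRPC round trip: Version, ImageFsInfo, then ListContainers with an empty filter, which is why CRI-O logs "No filters were applied, returning full container list". A hedged sketch of issuing the same ListContainers call against the crio socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter reproduces the "No filters were applied" responses
		// in the CRI-O debug log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}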
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.591767684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e88395fe-dc7c-4167-98af-0b6765407cda name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.591854598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e88395fe-dc7c-4167-98af-0b6765407cda name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.592693152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8b73425-f4a7-4ec4-bca0-14814016e6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.593124171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979792593105004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8b73425-f4a7-4ec4-bca0-14814016e6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.593556846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58c19f70-a26a-4062-bb80-afa3151454ce name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.593626124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58c19f70-a26a-4062-bb80-afa3151454ce name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.593863046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab,PodSandboxId:45a93305b00899013d0d71b1b8e58ac83bef45042904bfb2ed09670d8beb6801,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979773705357099,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-prqw2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e06566d3-d669-46e4-9ecd-ca9664be4767,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c769edd8580ef47b19635b7bc4e060a5d29d7037886efa19e1426111b071b16b,PodSandboxId:52c0bb9597548a5b3f7fb9e48effac9627ad298984ce920fe9496b954a8ced0a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978532484157556,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s4lsr,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e12693f1-8a4a-4545-9072-67c8b236304f,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbdc527e2487fc7034e52a1e0aab86616d72ce192bcb27124e63f5472c0ae1a,PodSandboxId:955509fbdc454bbf34ddb6149e522e2c4b03fee61cd5c36c8fffe99a414cb610,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978521355415955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58b014bb-8629-4398-a2ec-6ec95fa59111,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c3b1d3cfb3efdd77d5179e4fcac70c7070fc505b6facfcf91eb1921118fa8f,PodSandboxId:a46cbf6eb67f1232043e2fd821cfc8f6a9b72bc5b1a99880cf5565f5ec979bb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978520144725942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdf87,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fc6237-1829-4315-b9cf-3354bd7a96a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6325bd36c067eee428c7803521927f6a6196fe4a0b04c5a2c6297544346c1db,PodSandboxId:e237fb1e86caaa521e33a0fd8d54eef41bd2a7602623a8556422511b7bcb8fb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978519958192171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pd5ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33b4c24-e93a-4370-a289-6dca24315394,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed6814a49405a66651fa16b3df6628dd8b881eb0c4f70af272cbed8a134d6cf,PodSandboxId:8fa24c54407841ced6451a1a3f26003706a9ff360e132c523d0c25d39c48968f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978519150194552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26pw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b18bff33d10928f1003b694a6559d805d27dbcc3d900c7ac2762afac54b3be,PodSandboxId:367027a0e738588df917d8bd27826144ecd341d097a1e64519ca07b33cbb6416,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978509114294761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74e10132ca91cf6cbc4436341964199,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952751f0951d89d65f22626902540c921d59d3877480a4b416b9103cb8e0137b,PodSandboxId:6b04f84b431870eb1e922d87b83a99e4e9dbb567934dbde759deb4d207bd06d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978509114912422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62e070a9cf28916a3860ef6e0fb77479,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21eccd8b6b34a07c70e5ecd8b8b2562b77f25034b298eb6ae46a6dfaa3c4f7c3,PodSandboxId:6cc73d738a3fe323c53e2257f92e68e73cc6530f7422338af0cc4238266968d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978509072720623,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e513d1af57612b3edb7555da126534,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ebecccd9c6383ebbfaf2acfd7ef0e329916565f91bfc56a622dfe775308d47,PodSandboxId:364cb5a56eb28cdc960f96462f68d81ef302532128ffca37712fcc8937c87c40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978509041552773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b09f045811c5975886f1801c909f4aea78b89b8fc65510664955a862a516e4,PodSandboxId:2ad67d6956e8169af2fd4e65c232afced896468ec7e7a72a84bc28f8a06bbd3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978223790696548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58c19f70-a26a-4062-bb80-afa3151454ce name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.644075902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e3cf8e8-2b28-46cd-992a-298a9617e569 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.644194514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e3cf8e8-2b28-46cd-992a-298a9617e569 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.645508537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad887987-bda0-4c5d-9a5c-ed403277489e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.646072948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979792646048484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad887987-bda0-4c5d-9a5c-ed403277489e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.646951303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fffcacc-78fb-4f23-8478-4ceb409cf4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.647028075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fffcacc-78fb-4f23-8478-4ceb409cf4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.647337854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab,PodSandboxId:45a93305b00899013d0d71b1b8e58ac83bef45042904bfb2ed09670d8beb6801,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979773705357099,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-prqw2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e06566d3-d669-46e4-9ecd-ca9664be4767,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c769edd8580ef47b19635b7bc4e060a5d29d7037886efa19e1426111b071b16b,PodSandboxId:52c0bb9597548a5b3f7fb9e48effac9627ad298984ce920fe9496b954a8ced0a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978532484157556,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s4lsr,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e12693f1-8a4a-4545-9072-67c8b236304f,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbdc527e2487fc7034e52a1e0aab86616d72ce192bcb27124e63f5472c0ae1a,PodSandboxId:955509fbdc454bbf34ddb6149e522e2c4b03fee61cd5c36c8fffe99a414cb610,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978521355415955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58b014bb-8629-4398-a2ec-6ec95fa59111,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c3b1d3cfb3efdd77d5179e4fcac70c7070fc505b6facfcf91eb1921118fa8f,PodSandboxId:a46cbf6eb67f1232043e2fd821cfc8f6a9b72bc5b1a99880cf5565f5ec979bb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978520144725942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdf87,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fc6237-1829-4315-b9cf-3354bd7a96a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6325bd36c067eee428c7803521927f6a6196fe4a0b04c5a2c6297544346c1db,PodSandboxId:e237fb1e86caaa521e33a0fd8d54eef41bd2a7602623a8556422511b7bcb8fb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978519958192171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pd5ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33b4c24-e93a-4370-a289-6dca24315394,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed6814a49405a66651fa16b3df6628dd8b881eb0c4f70af272cbed8a134d6cf,PodSandboxId:8fa24c54407841ced6451a1a3f26003706a9ff360e132c523d0c25d39c48968f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978519150194552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26pw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b18bff33d10928f1003b694a6559d805d27dbcc3d900c7ac2762afac54b3be,PodSandboxId:367027a0e738588df917d8bd27826144ecd341d097a1e64519ca07b33cbb6416,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978509114294761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74e10132ca91cf6cbc4436341964199,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952751f0951d89d65f22626902540c921d59d3877480a4b416b9103cb8e0137b,PodSandboxId:6b04f84b431870eb1e922d87b83a99e4e9dbb567934dbde759deb4d207bd06d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978509114912422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62e070a9cf28916a3860ef6e0fb77479,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21eccd8b6b34a07c70e5ecd8b8b2562b77f25034b298eb6ae46a6dfaa3c4f7c3,PodSandboxId:6cc73d738a3fe323c53e2257f92e68e73cc6530f7422338af0cc4238266968d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978509072720623,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e513d1af57612b3edb7555da126534,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ebecccd9c6383ebbfaf2acfd7ef0e329916565f91bfc56a622dfe775308d47,PodSandboxId:364cb5a56eb28cdc960f96462f68d81ef302532128ffca37712fcc8937c87c40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978509041552773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b09f045811c5975886f1801c909f4aea78b89b8fc65510664955a862a516e4,PodSandboxId:2ad67d6956e8169af2fd4e65c232afced896468ec7e7a72a84bc28f8a06bbd3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978223790696548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fffcacc-78fb-4f23-8478-4ceb409cf4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.669524447Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d3e0016d-41d4-4e83-bf06-d1515455c719 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.669909717Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:52c0bb9597548a5b3f7fb9e48effac9627ad298984ce920fe9496b954a8ced0a,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-s4lsr,Uid:e12693f1-8a4a-4545-9072-67c8b236304f,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978522090687307,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s4lsr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e12693f1-8a4a-4545-9072-67c8b236304f,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:41.782953666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45a93305b00899013d0d71b1b8e58ac83bef45042904bfb2ed09670d8beb6801,Metadata:&PodSandboxMet
adata{Name:dashboard-metrics-scraper-86c6bf9756-prqw2,Uid:e06566d3-d669-46e4-9ecd-ca9664be4767,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978522083766572,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-prqw2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e06566d3-d669-46e4-9ecd-ca9664be4767,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:41.776912683Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:c6202b9109191c4a567072e50d923b4d0bd91b817d666768700bf240ba7ef972,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-d7r6d,Uid:6bd8680e-8338-48a2-b29b-a913d195bc9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978521361058582,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: metrics-server-f79f97bbb-d7r6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd8680e-8338-48a2-b29b-a913d195bc9e,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:41.054787446Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:955509fbdc454bbf34ddb6149e522e2c4b03fee61cd5c36c8fffe99a414cb610,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:58b014bb-8629-4398-a2ec-6ec95fa59111,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978521074652947,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58b014bb-8629-4398-a2ec-6ec95fa59111,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\
":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T11:48:40.760019739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a46cbf6eb67f1232043e2fd821cfc8f6a9b72bc5b1a99880cf5565f5ec979bb0,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-sdf87,Uid:30fc6237-1829-4315-b9cf-3354bd7a96a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978519278066974,Labels:map[string]string{io.kubernetes
.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-sdf87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fc6237-1829-4315-b9cf-3354bd7a96a5,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:38.954211906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e237fb1e86caaa521e33a0fd8d54eef41bd2a7602623a8556422511b7bcb8fb0,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-pd5ml,Uid:c33b4c24-e93a-4370-a289-6dca24315394,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978519229218626,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-pd5ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33b4c24-e93a-4370-a289-6dca24315394,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:38.922990352Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:8fa24c54407841ced6451a1a3f26003706a9ff360e132c523d0c25d39c48968f,Metadata:&PodSandboxMetadata{Name:kube-proxy-26pw8,Uid:c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978519043881771,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-26pw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T11:48:38.737133408Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:367027a0e738588df917d8bd27826144ecd341d097a1e64519ca07b33cbb6416,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-407489,Uid:d74e10132ca91cf6cbc4436341964199,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978508901151374,Labels:map[string]string{component: kube-controlle
r-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74e10132ca91cf6cbc4436341964199,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d74e10132ca91cf6cbc4436341964199,kubernetes.io/config.seen: 2025-01-27T11:48:28.464343928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:364cb5a56eb28cdc960f96462f68d81ef302532128ffca37712fcc8937c87c40,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-407489,Uid:01c815a4e6fcd8ddb352152105c6df70,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737978508900544192,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,tier: control-plane,},Annotations:map[string]string{k
ubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8444,kubernetes.io/config.hash: 01c815a4e6fcd8ddb352152105c6df70,kubernetes.io/config.seen: 2025-01-27T11:48:28.464342167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b04f84b431870eb1e922d87b83a99e4e9dbb567934dbde759deb4d207bd06d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-407489,Uid:62e070a9cf28916a3860ef6e0fb77479,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978508898029628,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62e070a9cf28916a3860ef6e0fb77479,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62e070a9cf28916a3860ef6e0fb77479,kubernetes.io/config.seen: 2025-01-27T11:48:28.464344932Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6
cc73d738a3fe323c53e2257f92e68e73cc6530f7422338af0cc4238266968d7,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-407489,Uid:21e513d1af57612b3edb7555da126534,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737978508888824794,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e513d1af57612b3edb7555da126534,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: 21e513d1af57612b3edb7555da126534,kubernetes.io/config.seen: 2025-01-27T11:48:28.464337343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ad67d6956e8169af2fd4e65c232afced896468ec7e7a72a84bc28f8a06bbd3a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-407489,Uid:01c815a4e6fcd8ddb352152105c6df70,Namespace:kube-system,Attempt:0,},State:SANDB
OX_NOTREADY,CreatedAt:1737978223169992426,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8444,kubernetes.io/config.hash: 01c815a4e6fcd8ddb352152105c6df70,kubernetes.io/config.seen: 2025-01-27T11:43:42.724906268Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d3e0016d-41d4-4e83-bf06-d1515455c719 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.671331656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5113a7dc-7f86-42c5-99b7-f6d0b37c6ef3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.671413846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5113a7dc-7f86-42c5-99b7-f6d0b37c6ef3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:09:52 default-k8s-diff-port-407489 crio[719]: time="2025-01-27 12:09:52.671691017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab,PodSandboxId:45a93305b00899013d0d71b1b8e58ac83bef45042904bfb2ed09670d8beb6801,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737979773705357099,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-prqw2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e06566d3-d669-46e4-9ecd-ca9664be4767,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c769edd8580ef47b19635b7bc4e060a5d29d7037886efa19e1426111b071b16b,PodSandboxId:52c0bb9597548a5b3f7fb9e48effac9627ad298984ce920fe9496b954a8ced0a,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737978532484157556,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s4lsr,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: e12693f1-8a4a-4545-9072-67c8b236304f,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbdc527e2487fc7034e52a1e0aab86616d72ce192bcb27124e63f5472c0ae1a,PodSandboxId:955509fbdc454bbf34ddb6149e522e2c4b03fee61cd5c36c8fffe99a414cb610,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737978521355415955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58b014bb-8629-4398-a2ec-6ec95fa59111,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c3b1d3cfb3efdd77d5179e4fcac70c7070fc505b6facfcf91eb1921118fa8f,PodSandboxId:a46cbf6eb67f1232043e2fd821cfc8f6a9b72bc5b1a99880cf5565f5ec979bb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978520144725942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdf87,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fc6237-1829-4315-b9cf-3354bd7a96a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6325bd36c067eee428c7803521927f6a6196fe4a0b04c5a2c6297544346c1db,PodSandboxId:e237fb1e86caaa521e33a0fd8d54eef41bd2a7602623a8556422511b7bcb8fb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737978519958192171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pd5ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33b4c24-e93a-4370-a289-6dca24315394,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed6814a49405a66651fa16b3df6628dd8b881eb0c4f70af272cbed8a134d6cf,PodSandboxId:8fa24c54407841ced6451a1a3f26003706a9ff360e132c523d0c25d39c48968f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737978519150194552,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26pw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b18bff33d10928f1003b694a6559d805d27dbcc3d900c7ac2762afac54b3be,PodSandboxId:367027a0e738588df917d8bd27826144ecd341d097a1e64519ca07b33cbb6416,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee1
82b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737978509114294761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74e10132ca91cf6cbc4436341964199,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952751f0951d89d65f22626902540c921d59d3877480a4b416b9103cb8e0137b,PodSandboxId:6b04f84b431870eb1e922d87b83a99e4e9dbb567934dbde759deb4d207bd06d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSp
ec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737978509114912422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62e070a9cf28916a3860ef6e0fb77479,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21eccd8b6b34a07c70e5ecd8b8b2562b77f25034b298eb6ae46a6dfaa3c4f7c3,PodSandboxId:6cc73d738a3fe323c53e2257f92e68e73cc6530f7422338af0cc4238266968d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e
7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737978509072720623,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e513d1af57612b3edb7555da126534,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ebecccd9c6383ebbfaf2acfd7ef0e329916565f91bfc56a622dfe775308d47,PodSandboxId:364cb5a56eb28cdc960f96462f68d81ef302532128ffca37712fcc8937c87c40,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15
c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737978509041552773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b09f045811c5975886f1801c909f4aea78b89b8fc65510664955a862a516e4,PodSandboxId:2ad67d6956e8169af2fd4e65c232afced896468ec7e7a72a84bc28f8a06bbd3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737978223790696548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-407489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c815a4e6fcd8ddb352152105c6df70,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5113a7dc-7f86-42c5-99b7-f6d0b37c6ef3 name=/runtime.v1.RuntimeService/ListContainers
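	
	For context on the ListContainers/ListPodSandbox entries above: these are standard CRI RPCs that crio serves on the socket named in the node annotation further down (unix:///var/run/crio/crio.sock). The sketch below is a minimal, hypothetical Go client for reissuing the same call by hand while debugging; it is not part of the test suite, and it assumes k8s.io/cri-api and google.golang.org/grpc are available on the debugging host.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI socket that the node annotation points at.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same RPC as the /runtime.v1.RuntimeService/ListContainers entries
		// in the journal above; an empty filter returns every container.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncated ids match the "container status" table below.
			fmt.Printf("%s  %-18s %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}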
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fd78b8c1fda7d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   9                   45a93305b0089       dashboard-metrics-scraper-86c6bf9756-prqw2
	c769edd8580ef       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   52c0bb9597548       kubernetes-dashboard-7779f9b69b-s4lsr
	3bbdc527e2487       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   955509fbdc454       storage-provisioner
	33c3b1d3cfb3e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   a46cbf6eb67f1       coredns-668d6bf9bc-sdf87
	f6325bd36c067       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   e237fb1e86caa       coredns-668d6bf9bc-pd5ml
	2ed6814a49405       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   8fa24c5440784       kube-proxy-26pw8
	952751f0951d8       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   6b04f84b43187       kube-scheduler-default-k8s-diff-port-407489
	b2b18bff33d10       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   367027a0e7385       kube-controller-manager-default-k8s-diff-port-407489
	21eccd8b6b34a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   6cc73d738a3fe       etcd-default-k8s-diff-port-407489
	30ebecccd9c63       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   364cb5a56eb28       kube-apiserver-default-k8s-diff-port-407489
	f0b09f045811c       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   2ad67d6956e81       kube-apiserver-default-k8s-diff-port-407489
	
	
	==> coredns [33c3b1d3cfb3efdd77d5179e4fcac70c7070fc505b6facfcf91eb1921118fa8f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f6325bd36c067eee428c7803521927f6a6196fe4a0b04c5a2c6297544346c1db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-407489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-407489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=default-k8s-diff-port-407489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_48_34_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:48:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-407489
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:09:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:05:25 +0000   Mon, 27 Jan 2025 11:48:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:05:25 +0000   Mon, 27 Jan 2025 11:48:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:05:25 +0000   Mon, 27 Jan 2025 11:48:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:05:25 +0000   Mon, 27 Jan 2025 11:48:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    default-k8s-diff-port-407489
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf57d950e8404b9d85f9bd7a5bf97ccf
	  System UUID:                bf57d950-e840-4b9d-85f9-bd7a5bf97ccf
	  Boot ID:                    44761b02-2807-4301-9352-e46419799e22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-pd5ml                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-sdf87                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-407489                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-407489             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-407489    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-26pw8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-407489             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-d7r6d                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-prqw2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-s4lsr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node default-k8s-diff-port-407489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node default-k8s-diff-port-407489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node default-k8s-diff-port-407489 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node default-k8s-diff-port-407489 event: Registered Node default-k8s-diff-port-407489 in Controller
	
	
	==> dmesg <==
	[  +0.038100] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.894742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.964231] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571937] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.310258] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.062015] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062789] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.168706] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.146509] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.255465] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +3.906536] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +2.387098] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
	[  +0.061190] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.618589] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.912788] kauditd_printk_skb: 87 callbacks suppressed
	[Jan27 11:48] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.062582] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.493730] systemd-fstab-generator[3007]: Ignoring "noauto" option for root device
	[  +0.086280] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.566799] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.375909] systemd-fstab-generator[3269]: Ignoring "noauto" option for root device
	[  +6.737046] kauditd_printk_skb: 112 callbacks suppressed
	[  +6.326115] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [21eccd8b6b34a07c70e5ecd8b8b2562b77f25034b298eb6ae46a6dfaa3c4f7c3] <==
	{"level":"info","ts":"2025-01-27T12:08:18.535514Z","caller":"traceutil/trace.go:171","msg":"trace[1734208714] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1589; }","duration":"104.936264ms","start":"2025-01-27T12:08:18.430568Z","end":"2025-01-27T12:08:18.535505Z","steps":["trace[1734208714] 'agreement among raft nodes before linearized reading'  (duration: 104.814705ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:08:18.535442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.805893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:08:18.536370Z","caller":"traceutil/trace.go:171","msg":"trace[633624688] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1589; }","duration":"202.869918ms","start":"2025-01-27T12:08:18.333453Z","end":"2025-01-27T12:08:18.536323Z","steps":["trace[633624688] 'agreement among raft nodes before linearized reading'  (duration: 201.877627ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:08:29.760782Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2025-01-27T12:08:29.764307Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1348,"took":"3.018468ms","hash":3187045420,"current-db-size-bytes":3067904,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1761280,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T12:08:29.764382Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3187045420,"revision":1348,"compact-revision":1096}
	{"level":"info","ts":"2025-01-27T12:08:42.255188Z","caller":"traceutil/trace.go:171","msg":"trace[729478152] transaction","detail":"{read_only:false; response_revision:1608; number_of_response:1; }","duration":"236.851492ms","start":"2025-01-27T12:08:42.018314Z","end":"2025-01-27T12:08:42.255166Z","steps":["trace[729478152] 'process raft request'  (duration: 236.720714ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:08:42.255543Z","caller":"traceutil/trace.go:171","msg":"trace[735214095] linearizableReadLoop","detail":"{readStateIndex:1870; appliedIndex:1869; }","duration":"121.905904ms","start":"2025-01-27T12:08:42.133624Z","end":"2025-01-27T12:08:42.255530Z","steps":["trace[735214095] 'read index received'  (duration: 121.337612ms)","trace[735214095] 'applied index is now lower than readState.Index'  (duration: 567.461µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T12:08:42.255627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.993874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:08:42.255662Z","caller":"traceutil/trace.go:171","msg":"trace[1047039035] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1608; }","duration":"122.093863ms","start":"2025-01-27T12:08:42.133560Z","end":"2025-01-27T12:08:42.255654Z","steps":["trace[1047039035] 'agreement among raft nodes before linearized reading'  (duration: 122.034097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:08:42.464572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.429824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:08:42.464640Z","caller":"traceutil/trace.go:171","msg":"trace[830678579] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1608; }","duration":"131.528469ms","start":"2025-01-27T12:08:42.333099Z","end":"2025-01-27T12:08:42.464627Z","steps":["trace[830678579] 'range keys from in-memory index tree'  (duration: 131.379516ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:09:10.722121Z","caller":"traceutil/trace.go:171","msg":"trace[775541506] linearizableReadLoop","detail":"{readStateIndex:1903; appliedIndex:1902; }","duration":"292.737111ms","start":"2025-01-27T12:09:10.429364Z","end":"2025-01-27T12:09:10.722101Z","steps":["trace[775541506] 'read index received'  (duration: 292.51446ms)","trace[775541506] 'applied index is now lower than readState.Index'  (duration: 221.93µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:09:10.722528Z","caller":"traceutil/trace.go:171","msg":"trace[1903344342] transaction","detail":"{read_only:false; response_revision:1634; number_of_response:1; }","duration":"317.498628ms","start":"2025-01-27T12:09:10.405012Z","end":"2025-01-27T12:09:10.722511Z","steps":["trace[1903344342] 'process raft request'  (duration: 316.926259ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:10.722565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.218575ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T12:09:10.722683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:09:10.404996Z","time spent":"317.617223ms","remote":"127.0.0.1:55690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1633 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T12:09:10.722704Z","caller":"traceutil/trace.go:171","msg":"trace[1986558004] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1634; }","duration":"190.400467ms","start":"2025-01-27T12:09:10.532291Z","end":"2025-01-27T12:09:10.722691Z","steps":["trace[1986558004] 'agreement among raft nodes before linearized reading'  (duration: 190.155471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:10.722962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.605498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:10.723047Z","caller":"traceutil/trace.go:171","msg":"trace[244636911] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1634; }","duration":"293.711153ms","start":"2025-01-27T12:09:10.429328Z","end":"2025-01-27T12:09:10.723039Z","steps":["trace[244636911] 'agreement among raft nodes before linearized reading'  (duration: 293.603535ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:11.169422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.75111ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.170091Z","caller":"traceutil/trace.go:171","msg":"trace[728246634] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1634; }","duration":"281.423426ms","start":"2025-01-27T12:09:10.888644Z","end":"2025-01-27T12:09:11.170068Z","steps":["trace[728246634] 'range keys from in-memory index tree'  (duration: 280.73784ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:11.169824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.635314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:11.170352Z","caller":"traceutil/trace.go:171","msg":"trace[315986714] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1634; }","duration":"239.197081ms","start":"2025-01-27T12:09:10.931145Z","end":"2025-01-27T12:09:11.170342Z","steps":["trace[315986714] 'range keys from in-memory index tree'  (duration: 238.552626ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:09:13.013626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.927125ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:09:13.014149Z","caller":"traceutil/trace.go:171","msg":"trace[456652284] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1635; }","duration":"126.456642ms","start":"2025-01-27T12:09:12.887678Z","end":"2025-01-27T12:09:13.014135Z","steps":["trace[456652284] 'range keys from in-memory index tree'  (duration: 125.916872ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:53 up 26 min,  0 users,  load average: 0.23, 0.22, 0.20
	Linux default-k8s-diff-port-407489 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30ebecccd9c6383ebbfaf2acfd7ef0e329916565f91bfc56a622dfe775308d47] <==
	I0127 12:06:32.306327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:06:32.306424       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:08:31.305288       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:31.305378       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 12:08:32.306695       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:32.306770       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:08:32.306920       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:08:32.307039       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:08:32.307911       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:08:32.309027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 12:09:32.308960       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 12:09:32.309016       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 12:09:32.310138       1 handler_proxy.go:99] no RequestInfo found in the context
	I0127 12:09:32.310235       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0127 12:09:32.310236       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 12:09:32.311659       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f0b09f045811c5975886f1801c909f4aea78b89b8fc65510664955a862a516e4] <==
	W0127 11:48:23.499003       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.533697       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.562927       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.564367       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.565583       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.570915       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.641646       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.647386       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.649822       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.708172       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.748003       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.753355       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.756707       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.825580       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.849328       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.852730       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.898611       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.902111       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:23.932644       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.066432       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.090918       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.091024       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.160708       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.228980       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 11:48:24.263876       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b2b18bff33d10928f1003b694a6559d805d27dbcc3d900c7ac2762afac54b3be] <==
	I0127 12:04:56.702432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="92.31µs"
	E0127 12:05:08.027944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:08.092136       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:05:25.755567       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-407489"
	E0127 12:05:38.033455       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:05:38.099942       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:08.038902       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:08.106632       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:06:38.045650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:06:38.114940       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:08.051092       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:08.121588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:07:38.056841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:07:38.128304       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:08.062525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:08.136834       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:08:38.069818       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:08:38.145864       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 12:09:08.077307       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:09:08.155594       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:09:34.629003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="134.51µs"
	I0127 12:09:35.633804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="274.868µs"
	E0127 12:09:38.082502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 12:09:38.162823       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 12:09:47.703585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="61.131µs"
	
	
	==> kube-proxy [2ed6814a49405a66651fa16b3df6628dd8b881eb0c4f70af272cbed8a134d6cf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:48:39.457368       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:48:39.494915       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0127 11:48:39.494982       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:48:39.599656       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:48:39.599750       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:48:39.599816       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:48:39.603662       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:48:39.603904       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:48:39.603918       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:48:39.605116       1 config.go:199] "Starting service config controller"
	I0127 11:48:39.605161       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:48:39.605183       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:48:39.605187       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:48:39.608701       1 config.go:329] "Starting node config controller"
	I0127 11:48:39.608750       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:48:39.705241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:48:39.705284       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:48:39.709650       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [952751f0951d89d65f22626902540c921d59d3877480a4b416b9103cb8e0137b] <==
	W0127 11:48:31.354975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:48:31.355220       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:31.355304       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:48:31.356085       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.243053       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:48:32.243172       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.444525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:48:32.444617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.494628       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:48:32.495094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.524577       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:48:32.524664       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.557672       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:48:32.557771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.580290       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:48:32.580382       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.603782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:48:32.603910       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.639852       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:48:32.639968       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.656154       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:48:32.656255       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:48:32.665390       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:48:32.665558       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 11:48:34.417814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:09:24 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:24.089967    3014 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979764089448204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:24 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:24.090266    3014 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979764089448204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:24 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:24.688952    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-d7r6d" podUID="6bd8680e-8338-48a2-b29b-a913d195bc9e"
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]: I0127 12:09:33.687909    3014 scope.go:117] "RemoveContainer" containerID="55755ba891a69b16ebf4c12ee721249b7643e5c60b0ec19e4534274847dfc919"
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:33.746341    3014 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:09:33 default-k8s-diff-port-407489 kubelet[3014]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:09:34 default-k8s-diff-port-407489 kubelet[3014]: I0127 12:09:34.072964    3014 scope.go:117] "RemoveContainer" containerID="55755ba891a69b16ebf4c12ee721249b7643e5c60b0ec19e4534274847dfc919"
	Jan 27 12:09:34 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:34.092900    3014 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979774092539982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:34 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:34.092932    3014 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979774092539982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:34 default-k8s-diff-port-407489 kubelet[3014]: I0127 12:09:34.610903    3014 scope.go:117] "RemoveContainer" containerID="fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab"
	Jan 27 12:09:34 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:34.611339    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-prqw2_kubernetes-dashboard(e06566d3-d669-46e4-9ecd-ca9664be4767)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-prqw2" podUID="e06566d3-d669-46e4-9ecd-ca9664be4767"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: I0127 12:09:35.613774    3014 scope.go:117] "RemoveContainer" containerID="fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:35.613927    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-prqw2_kubernetes-dashboard(e06566d3-d669-46e4-9ecd-ca9664be4767)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-prqw2" podUID="e06566d3-d669-46e4-9ecd-ca9664be4767"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:35.727895    3014 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:35.728069    3014 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:35.729024    3014 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jp5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-d7r6d_kube-system(6bd8680e-8338-48a2-b29b-a913d195bc9e): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 12:09:35 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:35.731057    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-d7r6d" podUID="6bd8680e-8338-48a2-b29b-a913d195bc9e"
	Jan 27 12:09:44 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:44.095150    3014 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979784094762925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:44 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:44.095197    3014 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979784094762925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:09:47 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:47.688202    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-d7r6d" podUID="6bd8680e-8338-48a2-b29b-a913d195bc9e"
	Jan 27 12:09:49 default-k8s-diff-port-407489 kubelet[3014]: I0127 12:09:49.687813    3014 scope.go:117] "RemoveContainer" containerID="fd78b8c1fda7decdc2b226ba50004c438373ef4bf7184577a2fc1fe7820c06ab"
	Jan 27 12:09:49 default-k8s-diff-port-407489 kubelet[3014]: E0127 12:09:49.688040    3014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-prqw2_kubernetes-dashboard(e06566d3-d669-46e4-9ecd-ca9664be4767)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-prqw2" podUID="e06566d3-d669-46e4-9ecd-ca9664be4767"
	
	
	==> kubernetes-dashboard [c769edd8580ef47b19635b7bc4e060a5d29d7037886efa19e1426111b071b16b] <==
	2025/01/27 11:57:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:58:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 11:59:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:01:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:02:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:03:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:04:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:05:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:06:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:07:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:08:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:09:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:09:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3bbdc527e2487fc7034e52a1e0aab86616d72ce192bcb27124e63f5472c0ae1a] <==
	I0127 11:48:41.494206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:48:41.543018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:48:41.543078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:48:41.561187       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:48:41.561335       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-407489_e3150850-63ea-423d-882f-9ae42936d8d4!
	I0127 11:48:41.565515       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f17c4c34-2c4c-4c03-ae01-16b91962a4b5", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-407489_e3150850-63ea-423d-882f-9ae42936d8d4 became leader
	I0127 11:48:41.662643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-407489_e3150850-63ea-423d-882f-9ae42936d8d4!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407489 -n default-k8s-diff-port-407489
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-407489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-d7r6d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-407489 describe pod metrics-server-f79f97bbb-d7r6d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-407489 describe pod metrics-server-f79f97bbb-d7r6d: exit status 1 (77.18586ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-d7r6d" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-407489 describe pod metrics-server-f79f97bbb-d7r6d: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1598.04s)
E0127 12:11:55.066132   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/old-k8s-version/serial/SecondStart (507.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 11:44:26.924570   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:45:50.005724   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:47:34.556414   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:49:26.925030   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:52:34.559846   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m25.461051642s)

-- stdout --
	* [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-570778" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
-- /stdout --
** stderr ** 
	I0127 11:44:15.929598   70686 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:44:15.929689   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929697   70686 out.go:358] Setting ErrFile to fd 2...
	I0127 11:44:15.929701   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929887   70686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:44:15.930463   70686 out.go:352] Setting JSON to false
	I0127 11:44:15.931400   70686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8756,"bootTime":1737969500,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:44:15.931492   70686 start.go:139] virtualization: kvm guest
	I0127 11:44:15.933961   70686 out.go:177] * [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:44:15.935491   70686 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:44:15.935496   70686 notify.go:220] Checking for updates...
	I0127 11:44:15.938050   70686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:44:15.939411   70686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:15.940688   70686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:44:15.942034   70686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:44:15.943410   70686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:44:15.945138   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:15.945529   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.945574   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.962483   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0127 11:44:15.963003   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.963519   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.963555   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.963966   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.964195   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:15.965767   70686 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:44:15.966927   70686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:44:15.967285   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.967321   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.981938   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0127 11:44:15.982353   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.982892   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.982918   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.983289   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.984121   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.021180   70686 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:44:16.022570   70686 start.go:297] selected driver: kvm2
	I0127 11:44:16.022584   70686 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.022687   70686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:44:16.023358   70686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.023431   70686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:44:16.038219   70686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:44:16.038645   70686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:44:16.038674   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:16.038706   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:16.038739   70686 start.go:340] cluster config:
	{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
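	This resolved config is persisted verbatim to the profile's config.json (see the "Saving config" line a few entries below). A minimal sketch for reading fields back out of it, assuming jq is installed on the host; the field names mirror the struct printed above:
	
	  # hypothetical spot-check of the saved profile
	  jq '{version: .KubernetesConfig.KubernetesVersion, nodeIP: .Nodes[0].IP, runtime: .KubernetesConfig.ContainerRuntime}' \
	    /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json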
	I0127 11:44:16.038822   70686 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.041030   70686 out.go:177] * Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	I0127 11:44:16.042127   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:16.042176   70686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:44:16.042189   70686 cache.go:56] Caching tarball of preloaded images
	I0127 11:44:16.042300   70686 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:44:16.042314   70686 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 11:44:16.042429   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:16.042632   70686 start.go:360] acquireMachinesLock for old-k8s-version-570778: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:44:16.042691   70686 start.go:364] duration metric: took 36.964µs to acquireMachinesLock for "old-k8s-version-570778"
	I0127 11:44:16.042707   70686 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:44:16.042713   70686 fix.go:54] fixHost starting: 
	I0127 11:44:16.043141   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:16.043185   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:16.057334   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0127 11:44:16.057814   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:16.058319   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:16.058342   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:16.059617   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:16.060717   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.060891   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetState
	I0127 11:44:16.062560   70686 fix.go:112] recreateIfNeeded on old-k8s-version-570778: state=Stopped err=<nil>
	I0127 11:44:16.062584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	W0127 11:44:16.062740   70686 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:44:16.064407   70686 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570778" ...
	I0127 11:44:16.065876   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .Start
	I0127 11:44:16.066119   70686 main.go:141] libmachine: (old-k8s-version-570778) starting domain...
	I0127 11:44:16.066142   70686 main.go:141] libmachine: (old-k8s-version-570778) ensuring networks are active...
	I0127 11:44:16.066789   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network default is active
	I0127 11:44:16.067106   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network mk-old-k8s-version-570778 is active
	I0127 11:44:16.067438   70686 main.go:141] libmachine: (old-k8s-version-570778) getting domain XML...
	I0127 11:44:16.068030   70686 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:44:17.326422   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for IP...
	I0127 11:44:17.327356   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.327887   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.327973   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.327883   70721 retry.go:31] will retry after 224.653843ms: waiting for domain to come up
	I0127 11:44:17.554516   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.555006   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.555033   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.554963   70721 retry.go:31] will retry after 278.652732ms: waiting for domain to come up
	I0127 11:44:17.835676   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.836235   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.836263   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.836216   70721 retry.go:31] will retry after 413.765366ms: waiting for domain to come up
	I0127 11:44:18.251786   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.252318   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.252359   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.252291   70721 retry.go:31] will retry after 384.166802ms: waiting for domain to come up
	I0127 11:44:18.637567   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.638099   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.638123   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.638055   70721 retry.go:31] will retry after 472.449239ms: waiting for domain to come up
	I0127 11:44:19.112411   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.112876   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.112900   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.112842   70721 retry.go:31] will retry after 883.60392ms: waiting for domain to come up
	I0127 11:44:19.997950   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.998399   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.998421   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.998373   70721 retry.go:31] will retry after 736.173761ms: waiting for domain to come up
	I0127 11:44:20.736442   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:20.736964   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:20.737021   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:20.736930   70721 retry.go:31] will retry after 1.379977469s: waiting for domain to come up
	I0127 11:44:22.118774   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:22.119315   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:22.119346   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:22.119278   70721 retry.go:31] will retry after 1.846963021s: waiting for domain to come up
	I0127 11:44:23.968284   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:23.968756   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:23.968788   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:23.968709   70721 retry.go:31] will retry after 1.595738144s: waiting for domain to come up
	I0127 11:44:25.565970   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:25.566464   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:25.566496   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:25.566430   70721 retry.go:31] will retry after 2.837671431s: waiting for domain to come up
	I0127 11:44:28.405715   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:28.406305   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:28.406335   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:28.406277   70721 retry.go:31] will retry after 3.421231106s: waiting for domain to come up
	I0127 11:44:31.828582   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:31.829032   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:31.829085   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:31.829004   70721 retry.go:31] will retry after 3.418527811s: waiting for domain to come up
	I0127 11:44:35.249695   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250229   70686 main.go:141] libmachine: (old-k8s-version-570778) found domain IP: 192.168.50.193
	I0127 11:44:35.250264   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has current primary IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250273   70686 main.go:141] libmachine: (old-k8s-version-570778) reserving static IP address...
	I0127 11:44:35.250765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.250797   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | skip adding static IP to network mk-old-k8s-version-570778 - found existing host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"}
	I0127 11:44:35.250814   70686 main.go:141] libmachine: (old-k8s-version-570778) reserved static IP address 192.168.50.193 for domain old-k8s-version-570778
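	The retry loop above is polling libvirt for a DHCP lease on the domain's MAC address, with an increasing backoff, until the lease for 192.168.50.193 appears. A rough manual equivalent, assuming virsh is available on the host (the connection URI and network name are taken from the log):
	
	  # list DHCP leases handed out on the cluster's private network
	  virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-570778
	  # the wait ends once a lease for MAC 52:54:00:8c:78:99 shows up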
	I0127 11:44:35.250832   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for SSH...
	I0127 11:44:35.250848   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Getting to WaitForSSH function...
	I0127 11:44:35.253216   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253538   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.253571   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253691   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH client type: external
	I0127 11:44:35.253719   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa (-rw-------)
	I0127 11:44:35.253750   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:44:35.253765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | About to run SSH command:
	I0127 11:44:35.253782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | exit 0
	I0127 11:44:35.375237   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | SSH cmd err, output: <nil>: 
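	The WaitForSSH probe simply runs "exit 0" over the external ssh client with the options logged above; a trimmed-down sketch of the same check:
	
	  ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa \
	    docker@192.168.50.193 'exit 0' && echo "SSH is up"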
	I0127 11:44:35.375580   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:44:35.376204   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.378824   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379163   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.379195   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379421   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:35.379692   70686 machine.go:93] provisionDockerMachine start ...
	I0127 11:44:35.379720   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:35.379910   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.382057   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382361   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.382392   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382559   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.382738   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.382901   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.383079   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.383243   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.383528   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.383542   70686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:44:35.483536   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:44:35.483585   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.483889   70686 buildroot.go:166] provisioning hostname "old-k8s-version-570778"
	I0127 11:44:35.483924   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.484119   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.487189   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487543   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.487569   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487813   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.488019   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488147   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488310   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.488454   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.488629   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.488641   70686 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570778 && echo "old-k8s-version-570778" | sudo tee /etc/hostname
	I0127 11:44:35.606107   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570778
	
	I0127 11:44:35.606140   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.609822   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610293   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.610329   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610472   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.610663   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610815   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610983   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.611167   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.611325   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.611342   70686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570778/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:44:35.720742   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
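	Taken together, the two commands above set the transient and persistent hostname and pin 127.0.1.1 in /etc/hosts. A quick sketch for verifying the result from inside the guest, assuming the same SSH access as above:
	
	  hostname                      # expected: old-k8s-version-570778
	  grep '^127.0.1.1' /etc/hosts  # expected: 127.0.1.1 old-k8s-version-570778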
	I0127 11:44:35.720779   70686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:44:35.720803   70686 buildroot.go:174] setting up certificates
	I0127 11:44:35.720814   70686 provision.go:84] configureAuth start
	I0127 11:44:35.720826   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.721065   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.723782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724254   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.724290   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724483   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.726871   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.727196   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727322   70686 provision.go:143] copyHostCerts
	I0127 11:44:35.727369   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:44:35.727384   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:44:35.727452   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:44:35.727537   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:44:35.727545   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:44:35.727569   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:44:35.727649   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:44:35.727659   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:44:35.727686   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:44:35.727741   70686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570778 san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
	I0127 11:44:35.901422   70686 provision.go:177] copyRemoteCerts
	I0127 11:44:35.901473   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:44:35.901501   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.904015   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904354   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.904378   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904597   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.904771   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.904967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.905126   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:35.985261   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:44:36.008090   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:44:36.031357   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
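	configureAuth regenerated server.pem with the SANs listed in the "generating server cert" line and copied it to /etc/docker on the guest. A sketch for confirming the SANs made it into the certificate, assuming openssl is present in the VM:
	
	  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # expected entries per the san=[...] list above: 127.0.0.1, 192.168.50.193, localhost, minikube, old-k8s-version-570778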
	I0127 11:44:36.053784   70686 provision.go:87] duration metric: took 332.958985ms to configureAuth
	I0127 11:44:36.053812   70686 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:44:36.053986   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:36.054066   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.056825   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.057186   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057398   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.057612   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057801   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.058191   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.058400   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.058425   70686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:44:36.280974   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:44:36.281007   70686 machine.go:96] duration metric: took 901.295604ms to provisionDockerMachine
	I0127 11:44:36.281020   70686 start.go:293] postStartSetup for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:44:36.281033   70686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:44:36.281048   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.281334   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:44:36.281366   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.283980   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284452   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.284493   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284602   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.284759   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.284915   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.285033   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.361994   70686 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:44:36.366066   70686 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:44:36.366085   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:44:36.366142   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:44:36.366211   70686 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:44:36.366293   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:44:36.374729   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:36.396427   70686 start.go:296] duration metric: took 115.392742ms for postStartSetup
	I0127 11:44:36.396468   70686 fix.go:56] duration metric: took 20.353754717s for fixHost
	I0127 11:44:36.396491   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.399680   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400070   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.400097   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400246   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.400438   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400591   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400821   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.401019   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.401189   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.401200   70686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:44:36.500185   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978276.474640374
	
	I0127 11:44:36.500211   70686 fix.go:216] guest clock: 1737978276.474640374
	I0127 11:44:36.500221   70686 fix.go:229] Guest: 2025-01-27 11:44:36.474640374 +0000 UTC Remote: 2025-01-27 11:44:36.396473102 +0000 UTC m=+20.504127240 (delta=78.167272ms)
	I0127 11:44:36.500239   70686 fix.go:200] guest clock delta is within tolerance: 78.167272ms
	I0127 11:44:36.500256   70686 start.go:83] releasing machines lock for "old-k8s-version-570778", held for 20.457556974s
	I0127 11:44:36.500274   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.500555   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:36.503395   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503819   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.503860   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503969   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504404   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504676   70686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:44:36.504723   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.504778   70686 ssh_runner.go:195] Run: cat /version.json
	I0127 11:44:36.504802   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.507787   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.507815   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508140   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508175   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508207   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508225   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508347   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508547   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508557   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508735   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.508749   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508887   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.509027   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.509185   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.584389   70686 ssh_runner.go:195] Run: systemctl --version
	I0127 11:44:36.606466   70686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:44:36.746477   70686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:44:36.751936   70686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:44:36.751996   70686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:44:36.768698   70686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:44:36.768722   70686 start.go:495] detecting cgroup driver to use...
	I0127 11:44:36.768788   70686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:44:36.786842   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:44:36.799832   70686 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:44:36.799893   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:44:36.813751   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:44:36.827731   70686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:44:36.943310   70686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:44:37.088722   70686 docker.go:233] disabling docker service ...
	I0127 11:44:37.088789   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:44:37.103240   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:44:37.116205   70686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:44:37.254006   70686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:44:37.365382   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:44:37.379019   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:44:37.396330   70686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:44:37.396405   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.406845   70686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:44:37.406919   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.417968   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.428079   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
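	The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and move conmon into the pod cgroup, all in the same drop-in file. The result can be checked directly on the guest:
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected: pause_image = "registry.k8s.io/pause:3.2", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"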
	I0127 11:44:37.438133   70686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:44:37.448951   70686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:44:37.458320   70686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:44:37.458382   70686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:44:37.476279   70686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
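	The status-255 failure above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is exactly what the follow-up modprobe does. Replaying the sequence by hand:
	
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables            # now resolves instead of "cannot stat"
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'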
	I0127 11:44:37.486232   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:37.609635   70686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:44:37.703117   70686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:44:37.703185   70686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:44:37.707780   70686 start.go:563] Will wait 60s for crictl version
	I0127 11:44:37.707827   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:37.711561   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:44:37.746285   70686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:44:37.746370   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.774346   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.804220   70686 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:44:37.805652   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:37.808777   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809130   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:37.809168   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809355   70686 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:44:37.813621   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:44:37.826271   70686 kubeadm.go:883] updating cluster {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:44:37.826370   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:37.826406   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:37.875128   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:37.875204   70686 ssh_runner.go:195] Run: which lz4
	I0127 11:44:37.879162   70686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:44:37.883378   70686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:44:37.883408   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:44:39.317688   70686 crio.go:462] duration metric: took 1.438551878s to copy over tarball
	I0127 11:44:39.317750   70686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:44:42.264081   70686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946305063s)
	I0127 11:44:42.264109   70686 crio.go:469] duration metric: took 2.946394656s to extract the tarball
	I0127 11:44:42.264117   70686 ssh_runner.go:146] rm: /preloaded.tar.lz4
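	The preload path is just a copy into the guest plus an xattr-preserving extract into /var, followed by cleanup. The manual equivalent of the last two steps, with the flags copied from the log:
	
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm -f /preloaded.tar.lz4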
	I0127 11:44:42.307411   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:42.344143   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:42.344169   70686 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
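	Because crictl still reports the v1.20.0 images missing after the extract, LoadCachedImages falls back to the per-image cache on the host and transfers whatever podman cannot find in the VM. A sketch for listing that cache, assuming minikube's usual layout under this run's MINIKUBE_HOME (the images/amd64 subpath is an assumption, not taken from this log):
	
	  ls /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/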
	I0127 11:44:42.344233   70686 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.344271   70686 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.344279   70686 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.344249   70686 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.344344   70686 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.344362   70686 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:44:42.344415   70686 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.344314   70686 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.345773   70686 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.346448   70686 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.346465   70686 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.346547   70686 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.488970   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.490931   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.497125   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.504183   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.508337   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.519103   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.523858   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:44:42.600152   70686 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:44:42.600208   70686 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.600258   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629803   70686 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:44:42.629847   70686 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.629897   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629956   70686 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:44:42.629990   70686 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.630029   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656649   70686 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:44:42.656693   70686 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.656693   70686 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:44:42.656723   70686 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.656736   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656763   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.669267   70686 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:44:42.669313   70686 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.669350   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677774   70686 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:44:42.677823   70686 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:44:42.677876   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.677890   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677969   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.677987   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.678027   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.678039   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.678069   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.787131   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.787197   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.787314   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.813675   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.816360   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.816416   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.816437   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.930195   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.930298   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.930333   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.930346   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.971335   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.971389   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.971398   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:43.068772   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:44:43.068871   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:43.068882   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:44:43.068892   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:44:43.097755   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:44:43.097781   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:44:43.099343   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:44:43.116136   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:44:43.303986   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:43.439716   70686 cache_images.go:92] duration metric: took 1.095530522s to LoadCachedImages
	W0127 11:44:43.439813   70686 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
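Both fallbacks fail at this point: the preload did not contain the v1.20.0 images, and the per-image cache under .minikube/cache/images is empty, so LoadCachedImages surfaces the stat error as a warning and startup continues (the images can still be pulled from the registry later). As an aside, and not something this run did, the legacy cache subcommand is one way to pre-seed that directory for offline starts:

	# populate ~/.minikube/cache/images so LoadCachedImages finds the file next time
	minikube cache add registry.k8s.io/kube-proxy:v1.20.0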
	I0127 11:44:43.439832   70686 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 11:44:43.439974   70686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570778 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
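The bare ExecStart= line in the rendered unit is the standard systemd drop-in idiom: an empty assignment clears the ExecStart inherited from the base kubelet.service before the next line installs the full command. A quick way to confirm the override took effect on the guest (a sketch, using the paths written further below):

	# show the base unit merged with its drop-ins, then reload systemd
	systemctl cat kubelet
	sudo systemctl daemon-reload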
	I0127 11:44:43.440069   70686 ssh_runner.go:195] Run: crio config
	I0127 11:44:43.491732   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:43.491754   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:43.491765   70686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:44:43.491782   70686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570778 NodeName:old-k8s-version-570778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:44:43.491897   70686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570778"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
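	The rendered config stacks four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta2, the config API version current in kubeadm v1.20), a KubeletConfiguration, and a KubeProxyConfiguration. To compare against the defaults the pinned kubeadm binary would use for the first two (a sketch, using the binary path from the log):

	# print kubeadm's built-in defaults for InitConfiguration/ClusterConfiguration
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config print init-defaults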
	
	I0127 11:44:43.491951   70686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:44:43.501539   70686 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:44:43.501593   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:44:43.510444   70686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 11:44:43.526994   70686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:44:43.542977   70686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:44:43.559986   70686 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 11:44:43.564089   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
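The one-liner above is a dedupe-then-append: grep -v strips any existing control-plane.minikube.internal entry, the new mapping is appended, and the result goes through a temp file because a plain `sudo cmd > /etc/hosts` would redirect with the caller's privileges rather than root's. The same pattern, generalized (a sketch; HOST and IP are placeholders):

	# replace any existing HOST entry in /etc/hosts, then append the new mapping
	HOST=control-plane.minikube.internal
	IP=192.168.50.193
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts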
	I0127 11:44:43.576120   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:43.702431   70686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:44:43.719740   70686 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778 for IP: 192.168.50.193
	I0127 11:44:43.719759   70686 certs.go:194] generating shared ca certs ...
	I0127 11:44:43.719773   70686 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:43.719941   70686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:44:43.720011   70686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:44:43.720024   70686 certs.go:256] generating profile certs ...
	I0127 11:44:43.810274   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key
	I0127 11:44:43.810422   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f
	I0127 11:44:43.810480   70686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key
	I0127 11:44:43.810641   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:44:43.810684   70686 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:44:43.810697   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:44:43.810727   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:44:43.810761   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:44:43.810789   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:44:43.810838   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:43.811665   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:44:43.856247   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:44:43.898135   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:44:43.938193   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:44:43.960927   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:44:43.984028   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:44:44.008415   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:44:44.030915   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:44:44.055340   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:44:44.077556   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:44:44.101525   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:44:44.124400   70686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:44:44.140292   70686 ssh_runner.go:195] Run: openssl version
	I0127 11:44:44.145827   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:44:44.155834   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.159949   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.160022   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.165584   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:44:44.178174   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:44:44.189759   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.194947   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.195006   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.200696   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:44:44.211199   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:44:44.221194   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225257   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225297   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.230582   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
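The hash-and-symlink sequence above follows OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are located by subject-name hash, via symlinks named <hash>.0 (b5213941 is the hash of the minikubeCA subject, as the log shows). A minimal sketch of one iteration:

	# link a CA cert under its OpenSSL subject hash so verification can find it
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/$h.0"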
	I0127 11:44:44.240578   70686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:44:44.245082   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:44:44.252016   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:44:44.257760   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:44:44.264902   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:44:44.270934   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:44:44.276642   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
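Each `-checkend 86400` call exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether the control-plane certs need regenerating. Looped over the same files (a sketch):

	# flag any control-plane cert that expires within 24h
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c expires within 24h"
	done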
	I0127 11:44:44.282062   70686 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:44.282152   70686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:44:44.282190   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.318594   70686 cri.go:89] found id: ""
	I0127 11:44:44.318650   70686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:44:44.328642   70686 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:44:44.328665   70686 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:44:44.328716   70686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:44:44.337760   70686 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:44:44.338436   70686 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:44.338787   70686 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570778" cluster setting kubeconfig missing "old-k8s-version-570778" context setting]
	I0127 11:44:44.339275   70686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:44.379353   70686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:44:44.389831   70686 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0127 11:44:44.389864   70686 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:44:44.389876   70686 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:44:44.389917   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.429276   70686 cri.go:89] found id: ""
	I0127 11:44:44.429352   70686 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:44:44.446502   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:44:44.456332   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:44:44.456358   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:44:44.456406   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:44:44.465009   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:44:44.465064   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:44:44.474468   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:44:44.483271   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:44:44.483333   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:44:44.493091   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.501826   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:44:44.501887   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.511619   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:44:44.520146   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:44:44.520215   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
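The four grep/rm pairs above implement a simple staleness rule: a kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443. Here all four files are missing, so each grep fails and each file is removed (a no-op) so the kubeadm phases below regenerate them. Condensed (a sketch):

	# drop kubeconfigs that don't target the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done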
	I0127 11:44:44.529284   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:44:44.538474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:44.669112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.430626   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.649318   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.747035   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
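Rather than a full `kubeadm init`, the restart path replays individual init phases against the same config, in the usual order: certs, kubeconfig, kubelet-start, control-plane manifests, then local etcd. Condensed (a sketch; the unquoted $phase word-splitting is intentional):

	# phased restart, equivalent to the five commands above
	CFG=/var/tmp/minikube/kubeadm.yaml
	KPATH=/var/lib/minikube/binaries/v1.20.0
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config "$CFG"
	done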
	I0127 11:44:45.834253   70686 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:44:45.834345   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.334836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.834834   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.334682   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.834945   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.335112   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.834442   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.335101   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.835321   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.334868   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.835371   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.335142   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.835388   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.334604   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.835044   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.334680   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.834411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.335010   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.834554   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.335128   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.335140   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.835042   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.334817   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.834443   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.334777   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.835437   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.334852   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.834590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.335351   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.835115   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.334828   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.834481   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.334592   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.834653   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.335201   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.834728   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.334872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.835121   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:06.335002   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:06.835393   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.334717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.835225   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.335465   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.835195   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.335007   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.835362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.334590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.835441   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:11.334541   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:11.835283   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.335343   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.834836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.335067   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.834637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.334394   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.834608   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.835178   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.334479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.835000   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.335139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.835227   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.335309   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.835170   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.334384   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.835348   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.334845   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.835383   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.335090   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.834734   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.335362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.834567   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.335485   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.835040   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.334533   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.834544   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.334975   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.834941   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.334897   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.834607   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.334771   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.335354   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.834876   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.335076   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.334594   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.834603   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.335153   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.834967   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.335109   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.834477   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.335107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.835110   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.334563   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.835358   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.334401   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.835107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:36.335163   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:36.835139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.334510   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.834447   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.334776   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.834844   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.334806   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.835253   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.334905   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.834948   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:41.334866   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:41.834518   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.335359   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.834415   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.335098   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.834540   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.335306   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.834575   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.335244   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
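The timestamps above show the apiserver wait loop: one pgrep roughly every 500 ms for about a minute, after which minikube gives up on the process check and falls back to gathering diagnostics (next lines). An equivalent standalone loop (a sketch):

	# wait up to 60s for a kube-apiserver process to appear, polling at 500ms
	for i in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done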
	I0127 11:45:45.835032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:45.835116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:45.868609   70686 cri.go:89] found id: ""
	I0127 11:45:45.868640   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.868652   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:45.868659   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:45.868718   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:45.907767   70686 cri.go:89] found id: ""
	I0127 11:45:45.907796   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.907805   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:45.907812   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:45.907870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:45.940736   70686 cri.go:89] found id: ""
	I0127 11:45:45.940781   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.940791   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:45.940800   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:45.940945   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:45.972511   70686 cri.go:89] found id: ""
	I0127 11:45:45.972536   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.972544   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:45.972550   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:45.972621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:46.004929   70686 cri.go:89] found id: ""
	I0127 11:45:46.004958   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.004966   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:46.004971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:46.005020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:46.037172   70686 cri.go:89] found id: ""
	I0127 11:45:46.037205   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.037217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:46.037224   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:46.037284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:46.070282   70686 cri.go:89] found id: ""
	I0127 11:45:46.070311   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.070322   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:46.070330   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:46.070387   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:46.106109   70686 cri.go:89] found id: ""
	I0127 11:45:46.106139   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.106150   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:46.106163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:46.106176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:46.147686   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:46.147719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:46.199085   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:46.199119   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:46.212487   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:46.212515   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:46.331675   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:46.331698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:46.331710   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:48.902413   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:48.915872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:48.915933   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:48.950168   70686 cri.go:89] found id: ""
	I0127 11:45:48.950215   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.950223   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:48.950229   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:48.950280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:48.981915   70686 cri.go:89] found id: ""
	I0127 11:45:48.981947   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.981958   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:48.981966   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:48.982030   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:49.022418   70686 cri.go:89] found id: ""
	I0127 11:45:49.022448   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.022461   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:49.022468   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:49.022531   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:49.066138   70686 cri.go:89] found id: ""
	I0127 11:45:49.066164   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.066174   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:49.066181   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:49.066240   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:49.107856   70686 cri.go:89] found id: ""
	I0127 11:45:49.107887   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.107895   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:49.107901   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:49.107951   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:49.158460   70686 cri.go:89] found id: ""
	I0127 11:45:49.158492   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.158519   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:49.158545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:49.158608   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:49.194805   70686 cri.go:89] found id: ""
	I0127 11:45:49.194831   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.194839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:49.194844   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:49.194889   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:49.227445   70686 cri.go:89] found id: ""
	I0127 11:45:49.227475   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.227483   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:49.227491   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:49.227502   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:49.280386   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:49.280418   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:49.293755   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:49.293785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:49.366338   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:49.366366   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:49.366381   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:49.444064   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:49.444102   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:51.990077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:52.002185   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:52.002244   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:52.033585   70686 cri.go:89] found id: ""
	I0127 11:45:52.033608   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.033616   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:52.033622   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:52.033671   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:52.063740   70686 cri.go:89] found id: ""
	I0127 11:45:52.063766   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.063776   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:52.063784   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:52.063846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:52.098052   70686 cri.go:89] found id: ""
	I0127 11:45:52.098089   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.098115   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:52.098122   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:52.098186   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:52.130011   70686 cri.go:89] found id: ""
	I0127 11:45:52.130039   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.130048   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:52.130057   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:52.130101   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:52.163864   70686 cri.go:89] found id: ""
	I0127 11:45:52.163887   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.163894   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:52.163899   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:52.163946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:52.195990   70686 cri.go:89] found id: ""
	I0127 11:45:52.196020   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.196029   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:52.196034   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:52.196079   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:52.227747   70686 cri.go:89] found id: ""
	I0127 11:45:52.227780   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.227792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:52.227799   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:52.227860   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:52.262186   70686 cri.go:89] found id: ""
	I0127 11:45:52.262214   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.262224   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:52.262234   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:52.262249   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:52.318567   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:52.318603   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:52.332621   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:52.332646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:52.403429   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:52.403451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:52.403462   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:52.482267   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:52.482309   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
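Each retry cycle above is the same per-component sweep: minikube shells into the node and asks the runtime, one component at a time, whether any container (running or exited) matches the expected name. It all reduces to the crictl invocations already visible in the Run: lines. A minimal bash replay of the sweep, using only commands copied verbatim from the log (assumes crictl is on PATH and can reach the CRI-O socket):

	#!/usr/bin/env bash
	# Re-run minikube's control-plane container sweep by hand (sketch).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "no container found matching \"${name}\""   # mirrors logs.go:284
	  else
	    echo "${name}: ${ids}"
	  fi
	done

In this report every component returns an empty ID list, which is why each sweep is followed by a full log-gathering pass.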
	I0127 11:45:55.018478   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:55.032583   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:55.032655   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:55.070418   70686 cri.go:89] found id: ""
	I0127 11:45:55.070446   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.070454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:55.070460   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:55.070534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:55.102785   70686 cri.go:89] found id: ""
	I0127 11:45:55.102820   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.102831   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:55.102837   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:55.102893   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:55.140432   70686 cri.go:89] found id: ""
	I0127 11:45:55.140466   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.140477   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:55.140483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:55.140548   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:55.173071   70686 cri.go:89] found id: ""
	I0127 11:45:55.173097   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.173107   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:55.173115   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:55.173175   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:55.207834   70686 cri.go:89] found id: ""
	I0127 11:45:55.207867   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.207878   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:55.207886   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:55.207949   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:55.240758   70686 cri.go:89] found id: ""
	I0127 11:45:55.240786   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.240794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:55.240807   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:55.240852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:55.276038   70686 cri.go:89] found id: ""
	I0127 11:45:55.276067   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.276078   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:55.276085   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:55.276135   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:55.307786   70686 cri.go:89] found id: ""
	I0127 11:45:55.307818   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.307829   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:55.307841   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:55.307855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:55.384874   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:55.384908   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.425141   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:55.425169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:55.479108   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:55.479144   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:55.492988   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:55.493018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:55.557856   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
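The recurring describe-nodes failure has one cause: with no kube-apiserver container running, nothing answers on localhost:8443, so kubectl is refused before it can produce any output. To confirm from inside the node, the first command below is copied verbatim from the log; the port check is an added, hypothetical follow-up (assumes ss is available in the guest):

	# Verbatim from the log; fails with "connection refused" while the
	# apiserver is down.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig

	# Hypothetical extra check: confirm nothing is listening on 8443.
	sudo ss -ltn '( sport = :8443 )'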
	I0127 11:45:58.059727   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:58.072633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:58.072713   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:58.107460   70686 cri.go:89] found id: ""
	I0127 11:45:58.107494   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.107505   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:58.107513   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:58.107570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:58.143678   70686 cri.go:89] found id: ""
	I0127 11:45:58.143709   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.143721   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:58.143729   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:58.143794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:58.177914   70686 cri.go:89] found id: ""
	I0127 11:45:58.177942   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.177949   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:58.177957   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:58.178003   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:58.210641   70686 cri.go:89] found id: ""
	I0127 11:45:58.210679   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.210690   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:58.210698   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:58.210759   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:58.242373   70686 cri.go:89] found id: ""
	I0127 11:45:58.242408   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.242420   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:58.242427   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:58.242494   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:58.277921   70686 cri.go:89] found id: ""
	I0127 11:45:58.277954   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.277965   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:58.277973   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:58.278033   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:58.310342   70686 cri.go:89] found id: ""
	I0127 11:45:58.310373   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.310384   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:58.310391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:58.310459   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:58.345616   70686 cri.go:89] found id: ""
	I0127 11:45:58.345649   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.345660   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:58.345671   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:58.345687   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:58.380655   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:58.380680   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:58.433828   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:58.433859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:58.447666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:58.447703   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:58.510668   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:58.510698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:58.510714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:01.087242   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:01.099871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:01.099926   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:01.132252   70686 cri.go:89] found id: ""
	I0127 11:46:01.132285   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.132293   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:01.132298   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:01.132348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:01.163920   70686 cri.go:89] found id: ""
	I0127 11:46:01.163949   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.163960   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:01.163967   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:01.164034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:01.198833   70686 cri.go:89] found id: ""
	I0127 11:46:01.198858   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.198865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:01.198871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:01.198916   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:01.238722   70686 cri.go:89] found id: ""
	I0127 11:46:01.238753   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.238763   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:01.238779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:01.238844   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:01.272868   70686 cri.go:89] found id: ""
	I0127 11:46:01.272892   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.272898   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:01.272903   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:01.272947   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:01.307986   70686 cri.go:89] found id: ""
	I0127 11:46:01.308015   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.308024   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:01.308029   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:01.308082   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:01.341997   70686 cri.go:89] found id: ""
	I0127 11:46:01.342027   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.342039   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:01.342047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:01.342109   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:01.374940   70686 cri.go:89] found id: ""
	I0127 11:46:01.374968   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.374978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:01.374989   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:01.375002   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:01.428465   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:01.428500   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:01.442684   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:01.442708   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:01.512159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:01.512185   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:01.512198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:01.586215   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:01.586265   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
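Between sweeps the loop gathers the same log sources in a fixed set, only rotating their order: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, the (failing) describe-nodes output, and the container table. All five commands below are copied verbatim from the Run: lines and can be replayed manually over minikube ssh:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a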
	I0127 11:46:04.127745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:04.140798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:04.140873   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:04.175150   70686 cri.go:89] found id: ""
	I0127 11:46:04.175186   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.175197   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:04.175204   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:04.175282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:04.210697   70686 cri.go:89] found id: ""
	I0127 11:46:04.210727   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.210736   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:04.210744   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:04.210800   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:04.240777   70686 cri.go:89] found id: ""
	I0127 11:46:04.240803   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.240811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:04.240821   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:04.240865   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:04.273040   70686 cri.go:89] found id: ""
	I0127 11:46:04.273076   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.273087   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:04.273094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:04.273151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:04.308441   70686 cri.go:89] found id: ""
	I0127 11:46:04.308468   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.308478   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:04.308484   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:04.308546   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:04.346756   70686 cri.go:89] found id: ""
	I0127 11:46:04.346783   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.346793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:04.346802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:04.346870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:04.381718   70686 cri.go:89] found id: ""
	I0127 11:46:04.381747   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.381758   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:04.381766   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:04.381842   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:04.415875   70686 cri.go:89] found id: ""
	I0127 11:46:04.415913   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.415921   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:04.415930   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:04.415942   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:04.499951   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:04.499990   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.539557   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:04.539592   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:04.595977   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:04.596011   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:04.609081   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:04.609107   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:04.678937   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:07.179760   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:07.193186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:07.193259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:07.226455   70686 cri.go:89] found id: ""
	I0127 11:46:07.226487   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.226498   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:07.226507   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:07.226570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:07.259391   70686 cri.go:89] found id: ""
	I0127 11:46:07.259427   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.259439   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:07.259447   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:07.259520   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:07.295281   70686 cri.go:89] found id: ""
	I0127 11:46:07.295314   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.295326   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:07.295334   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:07.295384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:07.330145   70686 cri.go:89] found id: ""
	I0127 11:46:07.330177   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.330186   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:07.330194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:07.330260   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:07.368846   70686 cri.go:89] found id: ""
	I0127 11:46:07.368875   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.368882   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:07.368889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:07.368938   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:07.404802   70686 cri.go:89] found id: ""
	I0127 11:46:07.404832   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.404843   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:07.404851   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:07.404914   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:07.437053   70686 cri.go:89] found id: ""
	I0127 11:46:07.437081   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.437090   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:07.437096   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:07.437142   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:07.474455   70686 cri.go:89] found id: ""
	I0127 11:46:07.474482   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.474490   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:07.474498   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:07.474510   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:07.529193   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:07.529229   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:07.543329   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:07.543365   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:07.623019   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:07.623043   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:07.623057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:07.701237   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:07.701277   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:10.239258   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:10.252360   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:10.252423   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:10.288112   70686 cri.go:89] found id: ""
	I0127 11:46:10.288135   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.288143   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:10.288149   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:10.288195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:10.323260   70686 cri.go:89] found id: ""
	I0127 11:46:10.323288   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.323296   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:10.323302   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:10.323358   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:10.358662   70686 cri.go:89] found id: ""
	I0127 11:46:10.358686   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.358694   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:10.358700   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:10.358744   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:10.397231   70686 cri.go:89] found id: ""
	I0127 11:46:10.397262   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.397273   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:10.397281   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:10.397384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:10.430384   70686 cri.go:89] found id: ""
	I0127 11:46:10.430411   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.430419   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:10.430425   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:10.430490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:10.461361   70686 cri.go:89] found id: ""
	I0127 11:46:10.461387   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.461396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:10.461404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:10.461464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:10.497276   70686 cri.go:89] found id: ""
	I0127 11:46:10.497309   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.497318   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:10.497324   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:10.497389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:10.530718   70686 cri.go:89] found id: ""
	I0127 11:46:10.530751   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.530762   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:10.530772   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:10.530785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:10.578801   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:10.578839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:10.591288   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:10.591312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:10.655021   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:10.655051   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:10.655065   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:10.731115   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:10.731151   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:13.267173   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:13.280623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:13.280688   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:13.314325   70686 cri.go:89] found id: ""
	I0127 11:46:13.314362   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.314372   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:13.314380   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:13.314441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:13.346889   70686 cri.go:89] found id: ""
	I0127 11:46:13.346918   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.346929   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:13.346936   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:13.346989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:13.378900   70686 cri.go:89] found id: ""
	I0127 11:46:13.378929   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.378939   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:13.378945   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:13.379004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:13.412919   70686 cri.go:89] found id: ""
	I0127 11:46:13.412952   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.412963   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:13.412971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:13.413027   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:13.444222   70686 cri.go:89] found id: ""
	I0127 11:46:13.444250   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.444260   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:13.444266   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:13.444317   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:13.474180   70686 cri.go:89] found id: ""
	I0127 11:46:13.474206   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.474212   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:13.474218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:13.474277   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:13.507679   70686 cri.go:89] found id: ""
	I0127 11:46:13.507707   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.507718   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:13.507726   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:13.507785   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:13.540402   70686 cri.go:89] found id: ""
	I0127 11:46:13.540428   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.540436   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:13.540444   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:13.540454   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:13.619310   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:13.619341   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:13.659541   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:13.659568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:13.710958   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:13.710992   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:13.724362   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:13.724387   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:13.799175   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:16.299872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:16.313092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:16.313151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:16.344606   70686 cri.go:89] found id: ""
	I0127 11:46:16.344636   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.344647   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:16.344654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:16.344709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:16.378025   70686 cri.go:89] found id: ""
	I0127 11:46:16.378052   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.378060   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:16.378065   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:16.378112   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:16.409333   70686 cri.go:89] found id: ""
	I0127 11:46:16.409359   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.409366   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:16.409372   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:16.409417   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:16.440176   70686 cri.go:89] found id: ""
	I0127 11:46:16.440199   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.440207   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:16.440218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:16.440303   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:16.474293   70686 cri.go:89] found id: ""
	I0127 11:46:16.474325   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.474333   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:16.474339   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:16.474386   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:16.505778   70686 cri.go:89] found id: ""
	I0127 11:46:16.505801   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.505808   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:16.505814   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:16.505867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:16.540769   70686 cri.go:89] found id: ""
	I0127 11:46:16.540797   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.540807   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:16.540815   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:16.540870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:16.576592   70686 cri.go:89] found id: ""
	I0127 11:46:16.576620   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.576630   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:16.576640   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:16.576652   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:16.653408   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:16.653443   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:16.692433   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:16.692458   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:16.740803   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:16.740837   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:16.753287   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:16.753312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:16.826095   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:19.327736   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:19.340166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:19.340220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:19.371540   70686 cri.go:89] found id: ""
	I0127 11:46:19.371578   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.371591   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:19.371600   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:19.371673   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:19.404729   70686 cri.go:89] found id: ""
	I0127 11:46:19.404764   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.404774   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:19.404781   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:19.404837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:19.439789   70686 cri.go:89] found id: ""
	I0127 11:46:19.439825   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.439837   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:19.439846   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:19.439906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:19.470570   70686 cri.go:89] found id: ""
	I0127 11:46:19.470600   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.470611   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:19.470619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:19.470681   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:19.501777   70686 cri.go:89] found id: ""
	I0127 11:46:19.501805   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.501816   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:19.501824   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:19.501880   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:19.534181   70686 cri.go:89] found id: ""
	I0127 11:46:19.534210   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.534217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:19.534223   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:19.534284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:19.566593   70686 cri.go:89] found id: ""
	I0127 11:46:19.566620   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.566628   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:19.566633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:19.566693   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:19.599915   70686 cri.go:89] found id: ""
	I0127 11:46:19.599940   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.599951   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:19.599966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:19.599981   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:19.650351   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:19.650385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:19.663542   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:19.663567   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:19.734523   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:19.734552   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:19.734568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:19.808148   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:19.808182   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:22.345687   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:22.359497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:22.359568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:22.392346   70686 cri.go:89] found id: ""
	I0127 11:46:22.392372   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.392381   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:22.392386   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:22.392443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:22.425056   70686 cri.go:89] found id: ""
	I0127 11:46:22.425081   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.425089   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:22.425093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:22.425146   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:22.460472   70686 cri.go:89] found id: ""
	I0127 11:46:22.460501   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.460512   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:22.460519   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:22.460580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:22.494621   70686 cri.go:89] found id: ""
	I0127 11:46:22.494646   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.494656   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:22.494663   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:22.494724   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:22.531878   70686 cri.go:89] found id: ""
	I0127 11:46:22.531902   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.531909   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:22.531914   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:22.531961   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:22.566924   70686 cri.go:89] found id: ""
	I0127 11:46:22.566946   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.566953   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:22.566960   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:22.567019   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:22.601357   70686 cri.go:89] found id: ""
	I0127 11:46:22.601384   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.601394   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:22.601402   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:22.601467   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:22.634574   70686 cri.go:89] found id: ""
	I0127 11:46:22.634611   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.634620   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:22.634631   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:22.634641   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:22.683998   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:22.684027   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:22.697042   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:22.697068   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:22.758991   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:22.759018   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:22.759034   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:22.837791   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:22.837824   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:25.374998   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:25.387470   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:25.387527   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:25.419525   70686 cri.go:89] found id: ""
	I0127 11:46:25.419552   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.419559   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:25.419565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:25.419637   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:25.452027   70686 cri.go:89] found id: ""
	I0127 11:46:25.452051   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.452059   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:25.452064   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:25.452111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:25.482868   70686 cri.go:89] found id: ""
	I0127 11:46:25.482899   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.482909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:25.482916   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:25.482978   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:25.513413   70686 cri.go:89] found id: ""
	I0127 11:46:25.513438   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.513447   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:25.513453   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:25.513497   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:25.544499   70686 cri.go:89] found id: ""
	I0127 11:46:25.544525   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.544534   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:25.544545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:25.544591   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:25.576649   70686 cri.go:89] found id: ""
	I0127 11:46:25.576676   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.576686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:25.576694   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:25.576749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:25.613447   70686 cri.go:89] found id: ""
	I0127 11:46:25.613476   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.613483   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:25.613489   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:25.613547   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:25.645468   70686 cri.go:89] found id: ""
	I0127 11:46:25.645492   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.645503   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:25.645513   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:25.645530   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:25.724060   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:25.724112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:25.758966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:25.759001   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:25.809187   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:25.809218   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:25.822532   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:25.822563   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:25.889713   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
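The describe-nodes step keeps failing with "connection refused" on localhost:8443, the port the node's kubeconfig points at; together with the empty container listings above, this indicates the apiserver was never started rather than started and gone unhealthy. One way to confirm by hand (the kubectl line is the exact command the log runs; the /healthz probe is an added illustration and assumes curl is available in the guest and that the apiserver serves its usual self-signed certificate, hence -k):

	# The exact command the log runs; it fails while nothing listens on 8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig

	# Cheap liveness probe against the same port the kubeconfig targets
	curl -sk https://localhost:8443/healthz || echo "apiserver not listening"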
	I0127 11:46:28.390290   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:28.402720   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:28.402794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:28.433933   70686 cri.go:89] found id: ""
	I0127 11:46:28.433960   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.433971   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:28.433979   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:28.434037   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:28.465830   70686 cri.go:89] found id: ""
	I0127 11:46:28.465864   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.465874   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:28.465881   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:28.465939   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:28.497527   70686 cri.go:89] found id: ""
	I0127 11:46:28.497562   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.497570   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:28.497579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:28.497645   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:28.531270   70686 cri.go:89] found id: ""
	I0127 11:46:28.531299   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.531308   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:28.531316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:28.531371   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:28.563348   70686 cri.go:89] found id: ""
	I0127 11:46:28.563369   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.563376   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:28.563381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:28.563426   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:28.596997   70686 cri.go:89] found id: ""
	I0127 11:46:28.597020   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.597027   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:28.597032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:28.597078   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:28.631710   70686 cri.go:89] found id: ""
	I0127 11:46:28.631744   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.631756   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:28.631763   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:28.631822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:28.691511   70686 cri.go:89] found id: ""
	I0127 11:46:28.691543   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.691554   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:28.691565   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:28.691579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:28.742602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:28.742635   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:28.756184   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:28.756207   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:28.830835   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:28.830857   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:28.830868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:28.905594   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:28.905630   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:31.441466   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:31.453810   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:31.453884   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:31.486385   70686 cri.go:89] found id: ""
	I0127 11:46:31.486419   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.486428   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:31.486433   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:31.486486   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:31.518387   70686 cri.go:89] found id: ""
	I0127 11:46:31.518414   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.518422   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:31.518427   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:31.518487   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:31.553495   70686 cri.go:89] found id: ""
	I0127 11:46:31.553519   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.553527   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:31.553532   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:31.553585   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:31.587152   70686 cri.go:89] found id: ""
	I0127 11:46:31.587178   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.587187   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:31.587194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:31.587249   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:31.617431   70686 cri.go:89] found id: ""
	I0127 11:46:31.617459   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.617468   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:31.617474   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:31.617519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:31.651686   70686 cri.go:89] found id: ""
	I0127 11:46:31.651712   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.651720   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:31.651725   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:31.651771   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:31.684941   70686 cri.go:89] found id: ""
	I0127 11:46:31.684967   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.684977   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:31.684984   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:31.685042   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:31.718413   70686 cri.go:89] found id: ""
	I0127 11:46:31.718440   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.718451   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:31.718461   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:31.718476   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:31.767445   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:31.767470   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:31.780922   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:31.780949   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:31.846438   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:31.846462   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:31.846474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:31.926888   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:31.926923   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
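The "container status" line uses a small shell fallback: the backquoted `which crictl || echo crictl` keeps the bare command name even when crictl is not installed, so the invocation still fails cleanly and the outer `||` branch runs `docker ps -a` instead. The same guard pattern written out as a sketch (using `$(...)` in place of backquotes; behavior is the same):

	# Prefer crictl if it resolves on PATH; otherwise the literal name
	# "crictl" fails to execute and the docker fallback runs
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a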
	I0127 11:46:34.465125   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:34.479852   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:34.479930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:34.511060   70686 cri.go:89] found id: ""
	I0127 11:46:34.511084   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.511093   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:34.511098   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:34.511143   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:34.544234   70686 cri.go:89] found id: ""
	I0127 11:46:34.544263   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.544269   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:34.544275   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:34.544319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:34.578776   70686 cri.go:89] found id: ""
	I0127 11:46:34.578799   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.578809   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:34.578816   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:34.578871   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:34.611130   70686 cri.go:89] found id: ""
	I0127 11:46:34.611154   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.611163   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:34.611168   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:34.611225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:34.643126   70686 cri.go:89] found id: ""
	I0127 11:46:34.643153   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.643163   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:34.643171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:34.643227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:34.678033   70686 cri.go:89] found id: ""
	I0127 11:46:34.678076   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.678087   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:34.678094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:34.678160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:34.712414   70686 cri.go:89] found id: ""
	I0127 11:46:34.712443   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.712454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:34.712461   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:34.712534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:34.745083   70686 cri.go:89] found id: ""
	I0127 11:46:34.745109   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.745116   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:34.745124   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:34.745136   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:34.757666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:34.757694   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:34.823196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:34.823218   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:34.823230   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:34.905878   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:34.905913   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.942463   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:34.942488   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:37.493333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:37.505875   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:37.505935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:37.538445   70686 cri.go:89] found id: ""
	I0127 11:46:37.538470   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.538478   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:37.538484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:37.538537   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:37.569576   70686 cri.go:89] found id: ""
	I0127 11:46:37.569607   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.569618   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:37.569625   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:37.569687   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:37.603340   70686 cri.go:89] found id: ""
	I0127 11:46:37.603366   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.603376   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:37.603383   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:37.603441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:37.637178   70686 cri.go:89] found id: ""
	I0127 11:46:37.637211   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.637221   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:37.637230   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:37.637294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:37.669332   70686 cri.go:89] found id: ""
	I0127 11:46:37.669359   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.669367   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:37.669373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:37.669420   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:37.701983   70686 cri.go:89] found id: ""
	I0127 11:46:37.702012   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.702021   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:37.702028   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:37.702089   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:37.734833   70686 cri.go:89] found id: ""
	I0127 11:46:37.734856   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.734865   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:37.734871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:37.734927   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:37.768113   70686 cri.go:89] found id: ""
	I0127 11:46:37.768141   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.768149   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:37.768157   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:37.768167   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:37.839883   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:37.839917   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:37.876177   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:37.876210   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:37.928640   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:37.928669   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:37.942971   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:37.942995   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:38.012611   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.514324   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:40.526994   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:40.527053   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:40.561170   70686 cri.go:89] found id: ""
	I0127 11:46:40.561192   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.561200   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:40.561205   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:40.561248   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:40.597933   70686 cri.go:89] found id: ""
	I0127 11:46:40.597964   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.597973   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:40.597981   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:40.598049   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:40.633227   70686 cri.go:89] found id: ""
	I0127 11:46:40.633255   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.633263   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:40.633287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:40.633348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:40.667332   70686 cri.go:89] found id: ""
	I0127 11:46:40.667360   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.667368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:40.667373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:40.667434   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:40.702346   70686 cri.go:89] found id: ""
	I0127 11:46:40.702372   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.702383   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:40.702391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:40.702447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:40.733890   70686 cri.go:89] found id: ""
	I0127 11:46:40.733916   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.733924   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:40.733929   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:40.733979   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:40.766986   70686 cri.go:89] found id: ""
	I0127 11:46:40.767005   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.767011   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:40.767016   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:40.767069   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:40.809290   70686 cri.go:89] found id: ""
	I0127 11:46:40.809320   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.809331   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:40.809342   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:40.809363   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:40.863970   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:40.864006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:40.886163   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:40.886188   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:40.951248   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.951277   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:40.951293   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:41.025220   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:41.025251   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
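Between API probes, minikube also snapshots the node's own logs: the last 400 journal lines for the kubelet and crio units, plus warning-and-above kernel messages. The equivalent manual collection, with flags copied verbatim from the log (the output redirections are added here for illustration only):

	# Recent unit logs for the kubelet and the CRI-O runtime
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log

	# Kernel messages at warning severity and above, most recent 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log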
	I0127 11:46:43.562970   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:43.575475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:43.575540   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:43.614847   70686 cri.go:89] found id: ""
	I0127 11:46:43.614875   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.614885   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:43.614892   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:43.614957   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:43.651178   70686 cri.go:89] found id: ""
	I0127 11:46:43.651208   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.651219   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:43.651227   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:43.651282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:43.683752   70686 cri.go:89] found id: ""
	I0127 11:46:43.683777   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.683783   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:43.683788   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:43.683846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:43.718384   70686 cri.go:89] found id: ""
	I0127 11:46:43.718418   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.718429   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:43.718486   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:43.718557   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:43.751566   70686 cri.go:89] found id: ""
	I0127 11:46:43.751619   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.751631   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:43.751639   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:43.751701   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:43.785338   70686 cri.go:89] found id: ""
	I0127 11:46:43.785370   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.785381   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:43.785390   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:43.785453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:43.825291   70686 cri.go:89] found id: ""
	I0127 11:46:43.825320   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.825330   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:43.825337   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:43.825397   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:43.856396   70686 cri.go:89] found id: ""
	I0127 11:46:43.856422   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.856429   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:43.856437   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:43.856448   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:43.907954   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:43.907991   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:43.920963   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:43.920987   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:43.986527   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:43.986547   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:43.986562   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:44.062764   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:44.062796   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:46.599548   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:46.625909   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:46.625985   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:46.670285   70686 cri.go:89] found id: ""
	I0127 11:46:46.670317   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.670329   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:46.670337   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:46.670408   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:46.703591   70686 cri.go:89] found id: ""
	I0127 11:46:46.703628   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.703636   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:46.703642   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:46.703689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:46.734451   70686 cri.go:89] found id: ""
	I0127 11:46:46.734475   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.734482   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:46.734487   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:46.734539   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:46.768854   70686 cri.go:89] found id: ""
	I0127 11:46:46.768879   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.768886   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:46.768891   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:46.768937   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:46.798912   70686 cri.go:89] found id: ""
	I0127 11:46:46.798937   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.798945   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:46.798951   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:46.799009   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:46.832665   70686 cri.go:89] found id: ""
	I0127 11:46:46.832689   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.832696   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:46.832702   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:46.832751   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:46.863964   70686 cri.go:89] found id: ""
	I0127 11:46:46.863990   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.863998   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:46.864003   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:46.864064   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:46.902558   70686 cri.go:89] found id: ""
	I0127 11:46:46.902595   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.902606   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:46.902617   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:46.902632   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:46.937731   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:46.937754   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:46.986804   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:46.986839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:47.000095   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:47.000142   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:47.064072   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:47.064099   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:47.064118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:49.640691   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:49.653166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:49.653225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:49.687904   70686 cri.go:89] found id: ""
	I0127 11:46:49.687928   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.687938   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:49.687945   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:49.688000   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:49.725500   70686 cri.go:89] found id: ""
	I0127 11:46:49.725528   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.725537   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:49.725549   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:49.725610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:49.757793   70686 cri.go:89] found id: ""
	I0127 11:46:49.757823   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.757834   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:49.757841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:49.757901   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:49.789916   70686 cri.go:89] found id: ""
	I0127 11:46:49.789945   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.789955   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:49.789962   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:49.790020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:49.821431   70686 cri.go:89] found id: ""
	I0127 11:46:49.821461   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.821472   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:49.821479   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:49.821541   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:49.853511   70686 cri.go:89] found id: ""
	I0127 11:46:49.853541   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.853548   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:49.853554   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:49.853605   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:49.887197   70686 cri.go:89] found id: ""
	I0127 11:46:49.887225   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.887232   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:49.887237   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:49.887313   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:49.920423   70686 cri.go:89] found id: ""
	I0127 11:46:49.920454   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.920465   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:49.920476   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:49.920489   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:49.970455   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:49.970487   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:49.985812   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:49.985844   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:50.055494   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:50.055520   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:50.055536   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:50.134706   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:50.134743   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:52.675280   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:52.690464   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:52.690545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:52.722566   70686 cri.go:89] found id: ""
	I0127 11:46:52.722600   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.722611   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:52.722621   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:52.722683   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:52.754684   70686 cri.go:89] found id: ""
	I0127 11:46:52.754710   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.754718   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:52.754723   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:52.754782   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:52.786631   70686 cri.go:89] found id: ""
	I0127 11:46:52.786659   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.786685   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:52.786691   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:52.786745   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:52.817637   70686 cri.go:89] found id: ""
	I0127 11:46:52.817664   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.817672   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:52.817681   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:52.817737   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:52.853402   70686 cri.go:89] found id: ""
	I0127 11:46:52.853428   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.853437   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:52.853443   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:52.853504   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:52.893692   70686 cri.go:89] found id: ""
	I0127 11:46:52.893720   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.893727   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:52.893733   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:52.893780   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.924897   70686 cri.go:89] found id: ""
	I0127 11:46:52.924926   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.924934   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:52.924940   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:52.924988   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:52.955377   70686 cri.go:89] found id: ""
	I0127 11:46:52.955397   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.955404   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:52.955412   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:52.955422   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:53.007489   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:53.007518   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:53.020482   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:53.020508   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:53.088456   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:53.088489   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:53.088503   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:53.161401   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:53.161432   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:55.698676   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:55.711047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:55.711104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:55.741929   70686 cri.go:89] found id: ""
	I0127 11:46:55.741952   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.741960   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:55.741965   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:55.742016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:55.773353   70686 cri.go:89] found id: ""
	I0127 11:46:55.773385   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.773394   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:55.773399   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:55.773453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:55.805262   70686 cri.go:89] found id: ""
	I0127 11:46:55.805293   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.805303   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:55.805309   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:55.805356   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:55.837444   70686 cri.go:89] found id: ""
	I0127 11:46:55.837469   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.837477   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:55.837483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:55.837554   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:55.870483   70686 cri.go:89] found id: ""
	I0127 11:46:55.870519   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.870533   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:55.870541   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:55.870603   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:55.902327   70686 cri.go:89] found id: ""
	I0127 11:46:55.902364   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.902374   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:55.902381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:55.902448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:55.936231   70686 cri.go:89] found id: ""
	I0127 11:46:55.936269   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.936279   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:55.936287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:55.936369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:55.968008   70686 cri.go:89] found id: ""
	I0127 11:46:55.968032   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.968039   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:55.968047   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:55.968057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:56.018736   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:56.018766   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:56.031397   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:56.031423   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:56.097044   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:56.097066   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:56.097079   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:56.171821   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:56.171855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:58.715327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:58.728027   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:58.728087   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:58.758672   70686 cri.go:89] found id: ""
	I0127 11:46:58.758700   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.758712   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:58.758719   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:58.758786   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:58.790220   70686 cri.go:89] found id: ""
	I0127 11:46:58.790245   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.790255   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:58.790263   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:58.790327   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:58.822188   70686 cri.go:89] found id: ""
	I0127 11:46:58.822214   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.822221   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:58.822227   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:58.822273   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:58.863053   70686 cri.go:89] found id: ""
	I0127 11:46:58.863089   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.863096   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:58.863102   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:58.863156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:58.899216   70686 cri.go:89] found id: ""
	I0127 11:46:58.899259   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.899271   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:58.899279   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:58.899338   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:58.935392   70686 cri.go:89] found id: ""
	I0127 11:46:58.935425   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.935435   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:58.935441   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:58.935503   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:58.972729   70686 cri.go:89] found id: ""
	I0127 11:46:58.972759   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.972767   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:58.972772   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:58.972823   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:59.008660   70686 cri.go:89] found id: ""
	I0127 11:46:59.008689   70686 logs.go:282] 0 containers: []
	W0127 11:46:59.008698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:59.008707   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:59.008718   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:59.063158   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:59.063199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:59.075767   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:59.075799   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:59.142382   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:59.142406   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:59.142421   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:59.223068   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:59.223100   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:01.760319   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:01.774202   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:01.774282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:01.817355   70686 cri.go:89] found id: ""
	I0127 11:47:01.817389   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.817401   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:01.817408   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:01.817469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:01.862960   70686 cri.go:89] found id: ""
	I0127 11:47:01.862985   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.862996   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:01.863003   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:01.863065   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:01.899900   70686 cri.go:89] found id: ""
	I0127 11:47:01.899931   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.899942   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:01.899949   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:01.900014   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:01.934687   70686 cri.go:89] found id: ""
	I0127 11:47:01.934723   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.934735   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:01.934744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:01.934809   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:01.969463   70686 cri.go:89] found id: ""
	I0127 11:47:01.969490   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.969501   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:01.969507   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:01.969578   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:02.000732   70686 cri.go:89] found id: ""
	I0127 11:47:02.000762   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.000772   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:02.000779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:02.000837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:02.035717   70686 cri.go:89] found id: ""
	I0127 11:47:02.035740   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.035748   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:02.035755   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:02.035799   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:02.073457   70686 cri.go:89] found id: ""
	I0127 11:47:02.073488   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.073498   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:02.073506   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:02.073519   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:02.142775   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:02.142800   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:02.142819   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:02.224541   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:02.224579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:02.260807   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:02.260840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:02.315983   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:02.316017   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:04.830232   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:04.844321   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:04.844380   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:04.880946   70686 cri.go:89] found id: ""
	I0127 11:47:04.880977   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.880986   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:04.880991   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:04.881066   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:04.913741   70686 cri.go:89] found id: ""
	I0127 11:47:04.913766   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.913773   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:04.913778   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:04.913831   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:04.948526   70686 cri.go:89] found id: ""
	I0127 11:47:04.948558   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.948565   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:04.948571   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:04.948621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:04.982076   70686 cri.go:89] found id: ""
	I0127 11:47:04.982102   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.982112   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:04.982119   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:04.982181   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:05.014982   70686 cri.go:89] found id: ""
	I0127 11:47:05.015007   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.015018   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:05.015025   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:05.015111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:05.048025   70686 cri.go:89] found id: ""
	I0127 11:47:05.048054   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.048065   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:05.048073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:05.048132   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:05.078464   70686 cri.go:89] found id: ""
	I0127 11:47:05.078492   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.078502   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:05.078509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:05.078584   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:05.109525   70686 cri.go:89] found id: ""
	I0127 11:47:05.109560   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.109571   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:05.109581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:05.109595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:05.157576   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:05.157608   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:05.170049   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:05.170087   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:05.239411   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:05.239433   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:05.239447   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:05.318700   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:05.318742   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
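The crictl listings in each cycle use --quiet, which prints only container IDs, one per line; the empty string logged as found id: "" is therefore the expected result while nothing has been created. A trivial check in the same spirit, using only the command already shown above:

	ids=$(sudo crictl ps -a --quiet --name=etcd)
	if [ -z "$ids" ]; then
	    echo 'No container was found matching "etcd"'   # mirrors the log's warning
	fi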
	I0127 11:47:07.856193   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:07.870239   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:07.870310   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:07.910104   70686 cri.go:89] found id: ""
	I0127 11:47:07.910130   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.910138   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:07.910144   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:07.910189   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:07.945048   70686 cri.go:89] found id: ""
	I0127 11:47:07.945074   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.945084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:07.945092   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:07.945166   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:07.976080   70686 cri.go:89] found id: ""
	I0127 11:47:07.976111   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.976122   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:07.976128   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:07.976200   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:08.013354   70686 cri.go:89] found id: ""
	I0127 11:47:08.013388   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.013400   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:08.013407   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:08.013465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:08.045589   70686 cri.go:89] found id: ""
	I0127 11:47:08.045618   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.045626   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:08.045631   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:08.045689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:08.079539   70686 cri.go:89] found id: ""
	I0127 11:47:08.079565   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.079573   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:08.079579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:08.079650   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:08.110343   70686 cri.go:89] found id: ""
	I0127 11:47:08.110375   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.110383   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:08.110388   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:08.110447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:08.140367   70686 cri.go:89] found id: ""
	I0127 11:47:08.140398   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.140411   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:08.140422   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:08.140436   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:08.205212   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:08.205240   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:08.205255   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:08.277925   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:08.277956   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:08.314583   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:08.314609   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:08.362779   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:08.362809   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:10.876637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:10.890367   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:10.890448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:10.925658   70686 cri.go:89] found id: ""
	I0127 11:47:10.925688   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.925699   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:10.925707   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:10.925763   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:10.957444   70686 cri.go:89] found id: ""
	I0127 11:47:10.957478   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.957490   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:10.957498   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:10.957561   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:10.988373   70686 cri.go:89] found id: ""
	I0127 11:47:10.988401   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.988412   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:10.988419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:10.988483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:11.019641   70686 cri.go:89] found id: ""
	I0127 11:47:11.019672   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.019683   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:11.019690   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:11.019747   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:11.051614   70686 cri.go:89] found id: ""
	I0127 11:47:11.051643   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.051654   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:11.051661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:11.051709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:11.083356   70686 cri.go:89] found id: ""
	I0127 11:47:11.083386   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.083396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:11.083404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:11.083464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:11.115324   70686 cri.go:89] found id: ""
	I0127 11:47:11.115359   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.115370   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:11.115378   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:11.115451   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:11.150953   70686 cri.go:89] found id: ""
	I0127 11:47:11.150983   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.150994   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:11.151005   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:11.151018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:11.199824   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:11.199855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:11.212841   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:11.212906   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:11.278680   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:11.278707   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:11.278726   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:11.356679   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:11.356719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:13.900662   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:13.913787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:13.913849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:13.947893   70686 cri.go:89] found id: ""
	I0127 11:47:13.947922   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.947934   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:13.947943   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:13.948001   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:13.983161   70686 cri.go:89] found id: ""
	I0127 11:47:13.983190   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.983201   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:13.983209   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:13.983264   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:14.022256   70686 cri.go:89] found id: ""
	I0127 11:47:14.022284   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.022295   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:14.022303   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:14.022354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:14.056796   70686 cri.go:89] found id: ""
	I0127 11:47:14.056830   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.056841   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:14.056848   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:14.056907   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:14.094914   70686 cri.go:89] found id: ""
	I0127 11:47:14.094941   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.094948   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:14.094954   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:14.095011   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:14.133436   70686 cri.go:89] found id: ""
	I0127 11:47:14.133463   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.133471   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:14.133477   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:14.133542   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:14.169031   70686 cri.go:89] found id: ""
	I0127 11:47:14.169062   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.169072   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:14.169078   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:14.169125   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:14.212411   70686 cri.go:89] found id: ""
	I0127 11:47:14.212435   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.212443   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:14.212452   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:14.212463   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:14.262867   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:14.262898   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:14.275105   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:14.275131   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:14.341159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:14.341190   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:14.341208   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:14.415317   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:14.415367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:16.953543   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:16.966233   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:16.966320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:17.006909   70686 cri.go:89] found id: ""
	I0127 11:47:17.006936   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.006946   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:17.006953   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:17.007008   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:17.041632   70686 cri.go:89] found id: ""
	I0127 11:47:17.041659   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.041669   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:17.041677   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:17.041731   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:17.076772   70686 cri.go:89] found id: ""
	I0127 11:47:17.076801   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.076811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:17.076818   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:17.076870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:17.112391   70686 cri.go:89] found id: ""
	I0127 11:47:17.112422   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.112433   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:17.112440   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:17.112573   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:17.148197   70686 cri.go:89] found id: ""
	I0127 11:47:17.148229   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.148247   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:17.148255   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:17.148320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:17.186840   70686 cri.go:89] found id: ""
	I0127 11:47:17.186871   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.186882   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:17.186895   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:17.186953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:17.219412   70686 cri.go:89] found id: ""
	I0127 11:47:17.219443   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.219454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:17.219463   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:17.219534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:17.256447   70686 cri.go:89] found id: ""
	I0127 11:47:17.256478   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.256488   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:17.256499   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:17.256512   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.293919   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:17.293955   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:17.342997   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:17.343028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:17.356650   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:17.356679   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:17.425809   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:17.425838   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:17.425852   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.017327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:20.034172   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:20.034239   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:20.071873   70686 cri.go:89] found id: ""
	I0127 11:47:20.071895   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.071903   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:20.071908   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:20.071955   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:20.106387   70686 cri.go:89] found id: ""
	I0127 11:47:20.106410   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.106417   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:20.106422   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:20.106481   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:20.141095   70686 cri.go:89] found id: ""
	I0127 11:47:20.141130   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.141138   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:20.141144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:20.141194   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:20.183275   70686 cri.go:89] found id: ""
	I0127 11:47:20.183302   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.183310   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:20.183316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:20.183373   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:20.217954   70686 cri.go:89] found id: ""
	I0127 11:47:20.217981   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.217991   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:20.217999   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:20.218061   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:20.262572   70686 cri.go:89] found id: ""
	I0127 11:47:20.262604   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.262616   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:20.262623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:20.262677   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:20.297951   70686 cri.go:89] found id: ""
	I0127 11:47:20.297982   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.297993   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:20.298000   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:20.298088   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:20.331854   70686 cri.go:89] found id: ""
	I0127 11:47:20.331891   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.331901   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:20.331913   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:20.331930   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:20.387238   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:20.387274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:20.409789   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:20.409823   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:20.487425   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:20.487451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:20.487464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.563923   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:20.563959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:23.101745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:23.115010   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:23.115068   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:23.153195   70686 cri.go:89] found id: ""
	I0127 11:47:23.153223   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.153236   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:23.153244   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:23.153311   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:23.187393   70686 cri.go:89] found id: ""
	I0127 11:47:23.187420   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.187431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:23.187437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:23.187499   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:23.220850   70686 cri.go:89] found id: ""
	I0127 11:47:23.220879   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.220888   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:23.220896   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:23.220953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:23.256597   70686 cri.go:89] found id: ""
	I0127 11:47:23.256626   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.256636   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:23.256644   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:23.256692   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:23.296324   70686 cri.go:89] found id: ""
	I0127 11:47:23.296356   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.296366   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:23.296373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:23.296436   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:23.335645   70686 cri.go:89] found id: ""
	I0127 11:47:23.335672   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.335681   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:23.335687   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:23.335733   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:23.366972   70686 cri.go:89] found id: ""
	I0127 11:47:23.366995   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.367003   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:23.367008   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:23.367062   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:23.405377   70686 cri.go:89] found id: ""
	I0127 11:47:23.405404   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.405412   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:23.405420   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:23.405433   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:23.473871   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:23.473898   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:23.473918   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:23.548827   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:23.548868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:23.584272   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:23.584302   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:23.645470   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:23.645517   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
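The dmesg invocation repeated in each cycle narrows kernel output to warning severity and above: -P disables the pager, -H prints human-readable timestamps, -L=never strips color codes, and --level selects the priorities, with tail keeping the last 400 lines. The equivalent long-option form (per the util-linux dmesg flags; shown only to make the short flags readable):

	sudo dmesg --nopager --human --color=never --level=warn,err,crit,alert,emerg | tail -n 400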
	I0127 11:47:26.161139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:26.175269   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:26.175344   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:26.213990   70686 cri.go:89] found id: ""
	I0127 11:47:26.214019   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.214030   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:26.214038   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:26.214099   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:26.250643   70686 cri.go:89] found id: ""
	I0127 11:47:26.250672   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.250680   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:26.250685   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:26.250749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:26.289305   70686 cri.go:89] found id: ""
	I0127 11:47:26.289327   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.289336   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:26.289343   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:26.289400   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:26.327511   70686 cri.go:89] found id: ""
	I0127 11:47:26.327546   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.327557   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:26.327564   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:26.327629   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:26.363961   70686 cri.go:89] found id: ""
	I0127 11:47:26.363996   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.364011   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:26.364019   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:26.364076   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:26.403759   70686 cri.go:89] found id: ""
	I0127 11:47:26.403782   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.403793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:26.403801   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:26.403862   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:26.443391   70686 cri.go:89] found id: ""
	I0127 11:47:26.443419   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.443429   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:26.443436   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:26.443496   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:26.486086   70686 cri.go:89] found id: ""
	I0127 11:47:26.486189   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.486219   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:26.486255   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:26.486290   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:26.537761   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:26.537789   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:26.624695   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:26.624728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:26.644616   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:26.644646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:26.732815   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:26.732835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:26.732846   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:29.315744   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:29.331345   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:29.331421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:29.366233   70686 cri.go:89] found id: ""
	I0127 11:47:29.366264   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.366276   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:29.366283   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:29.366355   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:29.402282   70686 cri.go:89] found id: ""
	I0127 11:47:29.402310   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.402320   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:29.402327   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:29.402389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:29.438381   70686 cri.go:89] found id: ""
	I0127 11:47:29.438409   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.438420   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:29.438429   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:29.438483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:29.473386   70686 cri.go:89] found id: ""
	I0127 11:47:29.473408   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.473414   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:29.473419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:29.473465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:29.506930   70686 cri.go:89] found id: ""
	I0127 11:47:29.506954   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.506961   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:29.506966   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:29.507025   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:29.542763   70686 cri.go:89] found id: ""
	I0127 11:47:29.542786   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.542794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:29.542802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:29.542861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:29.578067   70686 cri.go:89] found id: ""
	I0127 11:47:29.578097   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.578108   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:29.578117   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:29.578176   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:29.613659   70686 cri.go:89] found id: ""
	I0127 11:47:29.613687   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.613698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:29.613709   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:29.613728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:29.659409   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:29.659446   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:29.718837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:29.718870   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:29.735558   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:29.735583   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:29.839999   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:29.840025   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:29.840043   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:32.447780   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:32.465728   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:32.465812   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:32.527859   70686 cri.go:89] found id: ""
	I0127 11:47:32.527947   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.527972   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:32.527990   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:32.528104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:32.576073   70686 cri.go:89] found id: ""
	I0127 11:47:32.576171   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.576187   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:32.576195   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:32.576290   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:32.623076   70686 cri.go:89] found id: ""
	I0127 11:47:32.623118   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.623130   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:32.623137   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:32.623225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:32.691228   70686 cri.go:89] found id: ""
	I0127 11:47:32.691318   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.691343   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:32.691362   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:32.691477   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:32.745780   70686 cri.go:89] found id: ""
	I0127 11:47:32.745811   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.745823   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:32.745831   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:32.745906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:32.789692   70686 cri.go:89] found id: ""
	I0127 11:47:32.789731   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.789741   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:32.789751   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:32.789817   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:32.826257   70686 cri.go:89] found id: ""
	I0127 11:47:32.826288   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.826299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:32.826306   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:32.826368   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:32.868284   70686 cri.go:89] found id: ""
	I0127 11:47:32.868309   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.868320   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:32.868332   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:32.868354   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:32.925073   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:32.925103   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:32.941771   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:32.941804   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:33.030670   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:33.030695   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:33.030706   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:33.113430   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:33.113464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... the same log-gathering cycle repeats roughly every 3 seconds from 11:47:35 to 11:48:07: pgrep finds no kube-apiserver process; crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers; and each "kubectl describe nodes" attempt fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I0127 11:48:09.644164   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:09.657446   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:09.657519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:09.696908   70686 cri.go:89] found id: ""
	I0127 11:48:09.696940   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.696950   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:09.696957   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:09.697016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:09.729636   70686 cri.go:89] found id: ""
	I0127 11:48:09.729665   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.729675   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:09.729682   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:09.729742   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:09.769699   70686 cri.go:89] found id: ""
	I0127 11:48:09.769726   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.769734   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:09.769740   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:09.769791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:09.801315   70686 cri.go:89] found id: ""
	I0127 11:48:09.801360   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.801368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:09.801374   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:09.801432   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:09.831170   70686 cri.go:89] found id: ""
	I0127 11:48:09.831212   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.831221   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:09.831226   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:09.831294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:09.862163   70686 cri.go:89] found id: ""
	I0127 11:48:09.862188   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.862194   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:09.862200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:09.862262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:09.893097   70686 cri.go:89] found id: ""
	I0127 11:48:09.893125   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.893136   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:09.893144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:09.893201   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:09.924215   70686 cri.go:89] found id: ""
	I0127 11:48:09.924247   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.924259   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:09.924269   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:09.924286   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:09.990827   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:09.990849   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:09.990859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:10.063335   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:10.063366   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:10.099158   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:10.099199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:10.150789   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:10.150821   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
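Each iteration above has the same shape: a pgrep for an apiserver process, then one crictl listing per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), every one returning `found id: ""`. A sketch of that retry pattern (not minikube's actual implementation; the 3-second cadence and 5-minute budget are assumptions read off the timestamps):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerIDs mirrors the `crictl ps -a --quiet --name=<x>` calls in
	// the log; an empty result corresponds to the `found id: ""` lines.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			if ids := containerIDs("kube-apiserver"); len(ids) > 0 {
				fmt.Println("kube-apiserver container present:", ids)
				return
			}
			fmt.Println("no kube-apiserver container yet; retrying")
			time.Sleep(3 * time.Second) // the cadence visible in the timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}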
	I0127 11:48:12.664524   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:12.677711   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:12.677791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:12.710353   70686 cri.go:89] found id: ""
	I0127 11:48:12.710377   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.710384   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:12.710389   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:12.710443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:12.743545   70686 cri.go:89] found id: ""
	I0127 11:48:12.743572   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.743579   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:12.743584   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:12.743646   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:12.775386   70686 cri.go:89] found id: ""
	I0127 11:48:12.775413   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.775423   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:12.775430   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:12.775488   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:12.808803   70686 cri.go:89] found id: ""
	I0127 11:48:12.808828   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.808835   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:12.808841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:12.808898   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:12.842531   70686 cri.go:89] found id: ""
	I0127 11:48:12.842554   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.842561   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:12.842566   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:12.842610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:12.875470   70686 cri.go:89] found id: ""
	I0127 11:48:12.875501   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.875512   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:12.875522   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:12.875579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:12.908768   70686 cri.go:89] found id: ""
	I0127 11:48:12.908790   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.908797   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:12.908802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:12.908848   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:12.943312   70686 cri.go:89] found id: ""
	I0127 11:48:12.943340   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.943348   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:12.943356   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:12.943368   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:12.995939   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:12.995971   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:13.009006   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:13.009028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:13.097589   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:13.097607   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:13.097618   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:13.180494   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:13.180526   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:15.719725   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:15.733707   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:15.733770   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:15.771051   70686 cri.go:89] found id: ""
	I0127 11:48:15.771076   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.771086   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:15.771094   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:15.771156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:15.803893   70686 cri.go:89] found id: ""
	I0127 11:48:15.803926   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.803938   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:15.803945   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:15.803995   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:15.840882   70686 cri.go:89] found id: ""
	I0127 11:48:15.840915   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.840927   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:15.840935   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:15.840993   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:15.879101   70686 cri.go:89] found id: ""
	I0127 11:48:15.879132   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.879144   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:15.879165   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:15.879227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:15.910272   70686 cri.go:89] found id: ""
	I0127 11:48:15.910306   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.910317   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:15.910325   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:15.910385   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:15.942060   70686 cri.go:89] found id: ""
	I0127 11:48:15.942085   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.942093   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:15.942099   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:15.942160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:15.975105   70686 cri.go:89] found id: ""
	I0127 11:48:15.975136   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.975147   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:15.975155   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:15.975219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:16.009270   70686 cri.go:89] found id: ""
	I0127 11:48:16.009302   70686 logs.go:282] 0 containers: []
	W0127 11:48:16.009313   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:16.009323   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:16.009337   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:16.059868   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:16.059901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:16.074089   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:16.074118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:16.150389   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:16.150435   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:16.150450   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:16.226031   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:16.226070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:18.766131   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:18.780688   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:18.780758   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:18.827413   70686 cri.go:89] found id: ""
	I0127 11:48:18.827443   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.827454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:18.827462   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:18.827528   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:18.890142   70686 cri.go:89] found id: ""
	I0127 11:48:18.890169   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.890179   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:18.890187   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:18.890252   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:18.921896   70686 cri.go:89] found id: ""
	I0127 11:48:18.921925   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.921933   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:18.921938   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:18.921989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:18.956705   70686 cri.go:89] found id: ""
	I0127 11:48:18.956728   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.956736   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:18.956744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:18.956813   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:18.989832   70686 cri.go:89] found id: ""
	I0127 11:48:18.989858   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.989868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:18.989874   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:18.989929   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:19.026132   70686 cri.go:89] found id: ""
	I0127 11:48:19.026159   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.026166   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:19.026173   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:19.026219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:19.059138   70686 cri.go:89] found id: ""
	I0127 11:48:19.059162   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.059170   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:19.059175   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:19.059220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:19.092018   70686 cri.go:89] found id: ""
	I0127 11:48:19.092048   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.092058   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:19.092069   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:19.092085   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:19.167121   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:19.167152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:19.205334   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:19.205364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:19.254602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:19.254639   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:19.268979   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:19.269006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:19.338679   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
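Between apiserver probes the loop gathers the same five diagnostic sources in varying order: the kubelet and CRI-O journals, a filtered dmesg, the describe-nodes attempt, and a container-status listing with a docker fallback. A sketch that runs those exact commands back to back, tolerating per-source failures (the sequencing is an illustration, not minikube's implementation; the command strings are copied verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One source failing (as describe-nodes does above) must not
		// stop the remaining gathers.
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Printf("gather %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("==> %q\n%s\n", c, out)
		}
	}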
	I0127 11:48:21.839791   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:21.852667   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:21.852727   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:21.886171   70686 cri.go:89] found id: ""
	I0127 11:48:21.886197   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.886205   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:21.886210   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:21.886257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:21.921652   70686 cri.go:89] found id: ""
	I0127 11:48:21.921679   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.921689   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:21.921696   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:21.921755   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:21.957643   70686 cri.go:89] found id: ""
	I0127 11:48:21.957670   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.957679   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:21.957686   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:21.957746   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:21.992841   70686 cri.go:89] found id: ""
	I0127 11:48:21.992871   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.992881   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:21.992888   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:21.992952   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:22.028313   70686 cri.go:89] found id: ""
	I0127 11:48:22.028356   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.028365   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:22.028376   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:22.028421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:22.063653   70686 cri.go:89] found id: ""
	I0127 11:48:22.063679   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.063686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:22.063692   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:22.063749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:22.095804   70686 cri.go:89] found id: ""
	I0127 11:48:22.095831   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.095839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:22.095845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:22.095904   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:22.128161   70686 cri.go:89] found id: ""
	I0127 11:48:22.128194   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.128205   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:22.128217   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:22.128231   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:22.166325   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:22.166348   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:22.216549   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:22.216599   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:22.229716   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:22.229745   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:22.295957   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:22.295985   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:22.296000   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:24.876705   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:24.889666   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:24.889741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:24.923871   70686 cri.go:89] found id: ""
	I0127 11:48:24.923904   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.923915   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:24.923923   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:24.923983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:24.959046   70686 cri.go:89] found id: ""
	I0127 11:48:24.959078   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.959090   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:24.959098   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:24.959151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:24.994427   70686 cri.go:89] found id: ""
	I0127 11:48:24.994457   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.994468   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:24.994475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:24.994535   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:25.026201   70686 cri.go:89] found id: ""
	I0127 11:48:25.026230   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.026239   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:25.026247   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:25.026309   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:25.058228   70686 cri.go:89] found id: ""
	I0127 11:48:25.058250   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.058258   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:25.058263   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:25.058319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:25.089128   70686 cri.go:89] found id: ""
	I0127 11:48:25.089165   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.089176   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:25.089186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:25.089262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:25.124376   70686 cri.go:89] found id: ""
	I0127 11:48:25.124404   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.124411   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:25.124417   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:25.124464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:25.157926   70686 cri.go:89] found id: ""
	I0127 11:48:25.157959   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.157970   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:25.157982   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:25.157996   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:25.208316   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:25.208347   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:25.223045   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:25.223070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:25.289735   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:25.289757   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:25.289771   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:25.376030   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:25.376082   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:27.914186   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:27.926651   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:27.926716   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:27.965235   70686 cri.go:89] found id: ""
	I0127 11:48:27.965263   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.965273   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:27.965279   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:27.965334   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:27.999266   70686 cri.go:89] found id: ""
	I0127 11:48:27.999301   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.999312   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:27.999320   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:27.999377   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:28.031394   70686 cri.go:89] found id: ""
	I0127 11:48:28.031442   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.031454   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:28.031462   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:28.031524   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:28.063460   70686 cri.go:89] found id: ""
	I0127 11:48:28.063494   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.063505   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:28.063513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:28.063579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:28.098052   70686 cri.go:89] found id: ""
	I0127 11:48:28.098075   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.098082   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:28.098087   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:28.098133   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:28.132561   70686 cri.go:89] found id: ""
	I0127 11:48:28.132592   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.132601   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:28.132609   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:28.132668   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:28.173166   70686 cri.go:89] found id: ""
	I0127 11:48:28.173197   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.173206   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:28.173212   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:28.173263   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:28.207104   70686 cri.go:89] found id: ""
	I0127 11:48:28.207134   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.207144   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:28.207155   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:28.207169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:28.255860   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:28.255897   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:28.270823   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:28.270849   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:28.340536   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:28.340562   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:28.340577   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:28.424875   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:28.424910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:30.970758   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:30.987346   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:30.987422   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:31.022870   70686 cri.go:89] found id: ""
	I0127 11:48:31.022900   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.022911   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:31.022919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:31.022980   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:31.056491   70686 cri.go:89] found id: ""
	I0127 11:48:31.056519   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.056529   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:31.056537   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:31.056593   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:31.091268   70686 cri.go:89] found id: ""
	I0127 11:48:31.091301   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.091313   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:31.091320   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:31.091378   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:31.124449   70686 cri.go:89] found id: ""
	I0127 11:48:31.124479   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.124489   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:31.124497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:31.124565   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:31.167383   70686 cri.go:89] found id: ""
	I0127 11:48:31.167410   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.167418   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:31.167424   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:31.167473   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:31.205066   70686 cri.go:89] found id: ""
	I0127 11:48:31.205165   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.205185   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:31.205194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:31.205265   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:31.242101   70686 cri.go:89] found id: ""
	I0127 11:48:31.242132   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.242144   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:31.242151   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:31.242208   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:31.278496   70686 cri.go:89] found id: ""
	I0127 11:48:31.278595   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.278610   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:31.278622   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:31.278645   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:31.366805   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:31.366835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:31.366851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:31.445608   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:31.445642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:31.487502   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:31.487529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:31.566139   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:31.566171   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.080397   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:34.094121   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:34.094187   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:34.131591   70686 cri.go:89] found id: ""
	I0127 11:48:34.131635   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.131646   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:34.131654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:34.131711   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:34.167143   70686 cri.go:89] found id: ""
	I0127 11:48:34.167175   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.167185   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:34.167192   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:34.167259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:34.203241   70686 cri.go:89] found id: ""
	I0127 11:48:34.203270   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.203283   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:34.203290   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:34.203349   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:34.238023   70686 cri.go:89] found id: ""
	I0127 11:48:34.238053   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.238061   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:34.238067   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:34.238115   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:34.273362   70686 cri.go:89] found id: ""
	I0127 11:48:34.273388   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.273398   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:34.273406   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:34.273469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:34.310047   70686 cri.go:89] found id: ""
	I0127 11:48:34.310073   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.310084   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:34.310092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:34.310148   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:34.346880   70686 cri.go:89] found id: ""
	I0127 11:48:34.346914   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.346924   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:34.346932   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:34.346987   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:34.382306   70686 cri.go:89] found id: ""
	I0127 11:48:34.382327   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.382339   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:34.382348   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:34.382364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:34.494656   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:34.494697   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:34.541974   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:34.542009   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:34.619534   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:34.619584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.634607   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:34.634631   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:34.705419   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
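The `sudo pgrep -xnf kube-apiserver.*minikube.*` line opening each cycle is the cheap first check: were an apiserver process alive, its PID would be found before any CRI listing. A hedged sketch of that check (the helper name apiserverPID is invented for illustration; the pgrep invocation is the one from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverPID wraps the pgrep call from the log; pgrep exits
	// non-zero when no process matches the pattern.
	func apiserverPID(profile string) (string, bool) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*"+profile+".*").Output()
		if err != nil {
			return "", false
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		if pid, ok := apiserverPID("minikube"); ok {
			fmt.Println("kube-apiserver running, pid", pid)
		} else {
			fmt.Println("no kube-apiserver process found")
		}
	}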
	I0127 11:48:37.206052   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:37.219444   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:37.219530   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:37.254304   70686 cri.go:89] found id: ""
	I0127 11:48:37.254334   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.254342   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:37.254349   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:37.254409   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:37.291229   70686 cri.go:89] found id: ""
	I0127 11:48:37.291264   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.291276   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:37.291289   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:37.291353   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:37.329358   70686 cri.go:89] found id: ""
	I0127 11:48:37.329381   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.329389   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:37.329394   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:37.329439   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:37.368500   70686 cri.go:89] found id: ""
	I0127 11:48:37.368529   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.368537   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:37.368543   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:37.368604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:37.400175   70686 cri.go:89] found id: ""
	I0127 11:48:37.400203   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.400213   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:37.400221   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:37.400284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:37.432661   70686 cri.go:89] found id: ""
	I0127 11:48:37.432687   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.432697   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:37.432704   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:37.432762   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:37.464843   70686 cri.go:89] found id: ""
	I0127 11:48:37.464886   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.464897   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:37.464905   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:37.464970   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:37.501795   70686 cri.go:89] found id: ""
	I0127 11:48:37.501818   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.501826   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:37.501835   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:37.501845   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:37.580256   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:37.580281   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:37.580297   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:37.658741   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:37.658790   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:37.701171   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:37.701198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:37.761906   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:37.761941   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.280848   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:40.294890   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:40.294962   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:40.333860   70686 cri.go:89] found id: ""
	I0127 11:48:40.333885   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.333904   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:40.333919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:40.333983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:40.377039   70686 cri.go:89] found id: ""
	I0127 11:48:40.377072   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.377083   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:40.377093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:40.377157   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:40.413874   70686 cri.go:89] found id: ""
	I0127 11:48:40.413899   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.413909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:40.413915   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:40.413976   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:40.453270   70686 cri.go:89] found id: ""
	I0127 11:48:40.453302   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.453313   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:40.453322   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:40.453438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:40.495704   70686 cri.go:89] found id: ""
	I0127 11:48:40.495739   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.495750   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:40.495759   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:40.495825   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:40.541078   70686 cri.go:89] found id: ""
	I0127 11:48:40.541117   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.541128   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:40.541135   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:40.541195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:40.577161   70686 cri.go:89] found id: ""
	I0127 11:48:40.577190   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.577201   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:40.577207   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:40.577267   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:40.611784   70686 cri.go:89] found id: ""
	I0127 11:48:40.611815   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.611825   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:40.611837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:40.611851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.627400   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:40.627429   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:40.697583   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:40.697609   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:40.697624   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:40.779493   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:40.779529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:40.829083   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:40.829117   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:43.382411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:43.399629   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:43.399702   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:43.433083   70686 cri.go:89] found id: ""
	I0127 11:48:43.433116   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.433127   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:43.433134   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:43.433207   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:43.471725   70686 cri.go:89] found id: ""
	I0127 11:48:43.471756   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.471788   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:43.471796   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:43.471861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:43.505911   70686 cri.go:89] found id: ""
	I0127 11:48:43.505944   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.505956   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:43.505964   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:43.506034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:43.545670   70686 cri.go:89] found id: ""
	I0127 11:48:43.545705   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.545715   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:43.545723   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:43.545773   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:43.588086   70686 cri.go:89] found id: ""
	I0127 11:48:43.588113   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.588124   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:43.588131   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:43.588193   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:43.626703   70686 cri.go:89] found id: ""
	I0127 11:48:43.626739   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.626747   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:43.626754   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:43.626810   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:43.666123   70686 cri.go:89] found id: ""
	I0127 11:48:43.666155   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.666164   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:43.666171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:43.666237   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:43.701503   70686 cri.go:89] found id: ""
	I0127 11:48:43.701527   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.701537   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:43.701548   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:43.701561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:43.752145   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:43.752177   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:43.766551   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:43.766579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:43.838715   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:43.838740   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:43.838753   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:43.923406   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:43.923439   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:46.470479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:46.483541   70686 kubeadm.go:597] duration metric: took 4m2.154865283s to restartPrimaryControlPlane
	W0127 11:48:46.483635   70686 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:48:46.483664   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:48:46.956612   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:46.970448   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:46.979726   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:46.990401   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:46.990418   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:46.990456   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:48:46.999850   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:46.999921   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:47.009371   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:48:47.019126   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:47.019177   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:47.029905   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.040611   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:47.040690   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.051767   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:48:47.063007   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:47.063076   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:48:47.074431   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:47.304989   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:50:43.920463   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:50:43.920584   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:50:43.922146   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:43.922214   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:43.922320   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:43.922480   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:43.922613   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:43.922673   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:43.924430   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:43.924530   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:43.924611   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:43.924680   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:43.924766   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:43.924851   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:43.924925   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:43.924977   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:43.925025   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:43.925150   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:43.925259   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:43.925316   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:43.925398   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:43.925467   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:43.925544   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:43.925633   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:43.925704   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:43.925839   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:43.925952   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:43.926012   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:43.926098   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:43.927567   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:43.927670   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:43.927749   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:43.927813   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:43.927885   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:43.928078   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:50:43.928123   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:50:43.928184   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928340   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928398   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928569   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928631   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928792   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928850   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929077   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929185   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929391   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929402   70686 kubeadm.go:310] 
	I0127 11:50:43.929456   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:50:43.929518   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:50:43.929531   70686 kubeadm.go:310] 
	I0127 11:50:43.929584   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:50:43.929647   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:50:43.929784   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:50:43.929800   70686 kubeadm.go:310] 
	I0127 11:50:43.929915   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:50:43.929961   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:50:43.930009   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:50:43.930019   70686 kubeadm.go:310] 
	I0127 11:50:43.930137   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:50:43.930253   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:50:43.930266   70686 kubeadm.go:310] 
	I0127 11:50:43.930419   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:50:43.930528   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:50:43.930621   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:50:43.930695   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:50:43.930745   70686 kubeadm.go:310] 
	W0127 11:50:43.930804   70686 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 11:50:43.930840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:50:44.381980   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:44.397504   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:50:44.407258   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:50:44.407280   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:50:44.407331   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:50:44.416517   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:50:44.416588   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:50:44.425543   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:50:44.433996   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:50:44.434043   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:50:44.442792   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.452342   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:50:44.452410   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.462650   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:50:44.471925   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:50:44.471985   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:50:44.481004   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:50:44.552326   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:44.552414   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:44.696875   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:44.697032   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:44.697169   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:44.872468   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:44.875109   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:44.875201   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:44.875263   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:44.875350   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:44.875402   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:44.875466   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:44.875514   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:44.875570   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:44.875679   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:44.875792   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:44.875910   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:44.875976   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:44.876030   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:45.015504   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:45.106020   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:45.326707   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:45.574018   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:45.595960   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:45.597194   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:45.597402   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:45.740527   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:45.743100   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:45.743237   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:45.746496   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:45.747484   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:45.748125   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:45.750039   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:51:25.751949   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:51:25.752243   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:25.752539   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:30.752865   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:30.753104   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:40.753548   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:40.753726   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:00.754215   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:00.754448   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753038   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:40.753327   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753353   70686 kubeadm.go:310] 
	I0127 11:52:40.753414   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:52:40.753473   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:52:40.753483   70686 kubeadm.go:310] 
	I0127 11:52:40.753541   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:52:40.753590   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:52:40.753730   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:52:40.753743   70686 kubeadm.go:310] 
	I0127 11:52:40.753898   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:52:40.753957   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:52:40.754014   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:52:40.754030   70686 kubeadm.go:310] 
	I0127 11:52:40.754195   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:52:40.754312   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:52:40.754321   70686 kubeadm.go:310] 
	I0127 11:52:40.754453   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:52:40.754573   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:52:40.754670   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:52:40.754766   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:52:40.754777   70686 kubeadm.go:310] 
	I0127 11:52:40.755376   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:40.755478   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:52:40.755572   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:52:40.755648   70686 kubeadm.go:394] duration metric: took 7m56.47359007s to StartCluster
	I0127 11:52:40.755695   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:52:40.755757   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:52:40.792993   70686 cri.go:89] found id: ""
	I0127 11:52:40.793026   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.793045   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:52:40.793055   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:52:40.793116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:52:40.832368   70686 cri.go:89] found id: ""
	I0127 11:52:40.832397   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.832410   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:52:40.832417   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:52:40.832478   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:52:40.865175   70686 cri.go:89] found id: ""
	I0127 11:52:40.865199   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.865208   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:52:40.865215   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:52:40.865280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:52:40.896556   70686 cri.go:89] found id: ""
	I0127 11:52:40.896586   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.896594   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:52:40.896600   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:52:40.896648   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:52:40.928729   70686 cri.go:89] found id: ""
	I0127 11:52:40.928765   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.928777   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:52:40.928784   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:52:40.928852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:52:40.962998   70686 cri.go:89] found id: ""
	I0127 11:52:40.963029   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.963039   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:52:40.963053   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:52:40.963111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:52:40.994577   70686 cri.go:89] found id: ""
	I0127 11:52:40.994606   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.994616   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:52:40.994623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:52:40.994669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:52:41.030825   70686 cri.go:89] found id: ""
	I0127 11:52:41.030861   70686 logs.go:282] 0 containers: []
	W0127 11:52:41.030872   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:52:41.030884   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:52:41.030900   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:52:41.084683   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:52:41.084714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:52:41.098908   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:52:41.098946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:52:41.176430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:52:41.176453   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:52:41.176465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:52:41.290183   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:52:41.290219   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 11:52:41.336066   70686 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:52:41.336124   70686 out.go:270] * 
	W0127 11:52:41.336202   70686 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:52:41.336227   70686 out.go:270] * 
	W0127 11:52:41.337558   70686 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:52:41.341361   70686 out.go:201] 
	W0127 11:52:41.342596   70686 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:52:41.342686   70686 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:52:41.342709   70686 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:52:41.344162   70686 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
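A minimal triage sketch for this failure, assuming shell access to the same profile: the inspection commands are the ones the kubeadm error text itself suggests, and the final retry adds the cgroup-driver override that minikube's own Suggestion line proposes (a workaround to try, not a confirmed fix):

	# inspect the kubelet inside the VM (per the kubeadm error text)
	out/minikube-linux-amd64 -p old-k8s-version-570778 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-570778 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# list control-plane containers via crictl (per the kubeadm error text)
	out/minikube-linux-amd64 -p old-k8s-version-570778 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the failed start with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-570778 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd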
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (239.181311ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25: (1.010282001s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-429764 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | disable-driver-mounts-429764                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:41 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-273200             | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-986409            | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-407489  | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:43 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273200                  | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-986409                 | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC | 27 Jan 25 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570778        | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-407489       | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC | 27 Jan 25 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC |                     |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570778             | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:44:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:44:15.929598   70686 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:44:15.929689   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929697   70686 out.go:358] Setting ErrFile to fd 2...
	I0127 11:44:15.929701   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929887   70686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:44:15.930463   70686 out.go:352] Setting JSON to false
	I0127 11:44:15.931400   70686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8756,"bootTime":1737969500,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:44:15.931492   70686 start.go:139] virtualization: kvm guest
	I0127 11:44:15.933961   70686 out.go:177] * [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:44:15.935491   70686 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:44:15.935496   70686 notify.go:220] Checking for updates...
	I0127 11:44:15.938050   70686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:44:15.939411   70686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:15.940688   70686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:44:15.942034   70686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:44:15.943410   70686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:44:12.181135   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.681538   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:15.945138   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:15.945529   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.945574   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.962483   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0127 11:44:15.963003   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.963519   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.963555   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.963966   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.964195   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:15.965767   70686 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:44:15.966927   70686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:44:15.967285   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.967321   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.981938   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0127 11:44:15.982353   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.982892   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.982918   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.983289   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.984121   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.021180   70686 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:44:16.022570   70686 start.go:297] selected driver: kvm2
	I0127 11:44:16.022584   70686 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.022687   70686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:44:16.023358   70686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.023431   70686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:44:16.038219   70686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:44:16.038645   70686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:44:16.038674   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:16.038706   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:16.038739   70686 start.go:340] cluster config:
	{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.038822   70686 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.041030   70686 out.go:177] * Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	I0127 11:44:16.042127   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:16.042176   70686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:44:16.042189   70686 cache.go:56] Caching tarball of preloaded images
	I0127 11:44:16.042300   70686 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:44:16.042314   70686 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 11:44:16.042429   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:16.042632   70686 start.go:360] acquireMachinesLock for old-k8s-version-570778: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:44:16.042691   70686 start.go:364] duration metric: took 36.964µs to acquireMachinesLock for "old-k8s-version-570778"
	I0127 11:44:16.042707   70686 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:44:16.042713   70686 fix.go:54] fixHost starting: 
	I0127 11:44:16.043141   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:16.043185   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:16.057334   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0127 11:44:16.057814   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:16.058319   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:16.058342   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:16.059617   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:16.060717   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.060891   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetState
	I0127 11:44:16.062560   70686 fix.go:112] recreateIfNeeded on old-k8s-version-570778: state=Stopped err=<nil>
	I0127 11:44:16.062584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	W0127 11:44:16.062740   70686 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:44:16.064407   70686 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570778" ...
	I0127 11:44:14.581269   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.080972   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.765953   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.266323   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:16.065876   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .Start
	I0127 11:44:16.066119   70686 main.go:141] libmachine: (old-k8s-version-570778) starting domain...
	I0127 11:44:16.066142   70686 main.go:141] libmachine: (old-k8s-version-570778) ensuring networks are active...
	I0127 11:44:16.066789   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network default is active
	I0127 11:44:16.067106   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network mk-old-k8s-version-570778 is active
	I0127 11:44:16.067438   70686 main.go:141] libmachine: (old-k8s-version-570778) getting domain XML...
	I0127 11:44:16.068030   70686 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:44:17.326422   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for IP...
	I0127 11:44:17.327356   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.327887   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.327973   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.327883   70721 retry.go:31] will retry after 224.653843ms: waiting for domain to come up
	I0127 11:44:17.554516   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.555006   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.555033   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.554963   70721 retry.go:31] will retry after 278.652732ms: waiting for domain to come up
	I0127 11:44:17.835676   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.836235   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.836263   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.836216   70721 retry.go:31] will retry after 413.765366ms: waiting for domain to come up
	I0127 11:44:18.251786   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.252318   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.252359   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.252291   70721 retry.go:31] will retry after 384.166802ms: waiting for domain to come up
	I0127 11:44:18.637567   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.638099   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.638123   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.638055   70721 retry.go:31] will retry after 472.449239ms: waiting for domain to come up
	I0127 11:44:19.112411   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.112876   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.112900   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.112842   70721 retry.go:31] will retry after 883.60392ms: waiting for domain to come up
	I0127 11:44:19.997950   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.998399   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.998421   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.998373   70721 retry.go:31] will retry after 736.173761ms: waiting for domain to come up
	I0127 11:44:20.736442   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:20.736964   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:20.737021   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:20.736930   70721 retry.go:31] will retry after 1.379977469s: waiting for domain to come up
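The retry.go lines above show libmachine polling libvirt for the domain's DHCP lease with a randomized, roughly growing backoff (224ms, 278ms, 413ms, ...). A minimal sketch of that pattern in Go; the jitter factor and function names here are assumptions, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff polls fn until it succeeds or attempts run out, sleeping
    // a jittered, growing interval between tries -- the same shape as the
    // "will retry after 224ms / 278ms / 413ms ..." lines in the log.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	wait := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// Jitter the delay so parallel waiters don't hit libvirt in lockstep.
    		d := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
    		time.Sleep(d)
    		wait *= 2
    	}
    	return errors.New("domain never reported an IP")
    }

    func main() {
    	tries := 0
    	_ = retryWithBackoff(func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	}, 10, 200*time.Millisecond)
    }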
	I0127 11:44:17.182032   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.184122   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.581213   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.079928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.765581   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.265882   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.118774   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:22.119315   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:22.119346   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:22.119278   70721 retry.go:31] will retry after 1.846963021s: waiting for domain to come up
	I0127 11:44:23.968284   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:23.968756   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:23.968788   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:23.968709   70721 retry.go:31] will retry after 1.595738144s: waiting for domain to come up
	I0127 11:44:25.565970   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:25.566464   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:25.566496   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:25.566430   70721 retry.go:31] will retry after 2.837671431s: waiting for domain to come up
	I0127 11:44:21.681373   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.682555   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.080232   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.080547   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.764338   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.766071   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.405715   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:28.406305   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:28.406335   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:28.406277   70721 retry.go:31] will retry after 3.421231106s: waiting for domain to come up
	I0127 11:44:26.181747   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.681419   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.681567   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.081045   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.579496   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.580035   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:29.264366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.264892   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.828582   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:31.829032   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:31.829085   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:31.829004   70721 retry.go:31] will retry after 3.418527811s: waiting for domain to come up
	I0127 11:44:35.249695   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250229   70686 main.go:141] libmachine: (old-k8s-version-570778) found domain IP: 192.168.50.193
	I0127 11:44:35.250264   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has current primary IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250273   70686 main.go:141] libmachine: (old-k8s-version-570778) reserving static IP address...
	I0127 11:44:35.250765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.250797   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | skip adding static IP to network mk-old-k8s-version-570778 - found existing host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"}
	I0127 11:44:35.250814   70686 main.go:141] libmachine: (old-k8s-version-570778) reserved static IP address 192.168.50.193 for domain old-k8s-version-570778
	I0127 11:44:35.250832   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for SSH...
	I0127 11:44:35.250848   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Getting to WaitForSSH function...
	I0127 11:44:35.253216   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253538   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.253571   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253691   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH client type: external
	I0127 11:44:35.253719   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa (-rw-------)
	I0127 11:44:35.253750   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:44:35.253765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | About to run SSH command:
	I0127 11:44:35.253782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | exit 0
	I0127 11:44:35.375237   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | SSH cmd err, output: <nil>: 
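The WaitForSSH step shells out to /usr/bin/ssh with host-key checking disabled and key-only auth, and treats a clean `exit 0` as proof the guest is reachable. A hedged sketch of assembling that command with os/exec; the options are a representative subset of the list in the log, and the helper name is made up:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // externalSSH builds the /usr/bin/ssh invocation seen in the log:
    // no known_hosts pollution, no password prompts, identity file only.
    func externalSSH(user, ip, keyPath, remoteCmd string) *exec.Cmd {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		fmt.Sprintf("%s@%s", user, ip),
    		remoteCmd,
    	}
    	return exec.Command("/usr/bin/ssh", args...)
    }

    func main() {
    	cmd := externalSSH("docker", "192.168.50.193", "/path/to/id_rsa", "exit 0")
    	fmt.Println(cmd.String()) // the probe the log runs to decide SSH is up
    }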
	I0127 11:44:35.375580   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:44:35.376204   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.378824   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379163   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.379195   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379421   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:35.379692   70686 machine.go:93] provisionDockerMachine start ...
	I0127 11:44:35.379720   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:35.379910   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.382057   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382361   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.382392   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382559   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.382738   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.382901   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.383079   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.383243   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.383528   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.383542   70686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:44:35.483536   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:44:35.483585   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.483889   70686 buildroot.go:166] provisioning hostname "old-k8s-version-570778"
	I0127 11:44:35.483924   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.484119   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.487189   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487543   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.487569   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487813   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.488019   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488147   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488310   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.488454   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.488629   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.488641   70686 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570778 && echo "old-k8s-version-570778" | sudo tee /etc/hostname
	I0127 11:44:35.606107   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570778
	
	I0127 11:44:35.606140   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.609822   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610293   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.610329   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610472   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.610663   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610815   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610983   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.611167   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.611325   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.611342   70686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570778/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:44:35.720742   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
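The script above is the usual make-the-hostname-resolvable dance: if no /etc/hosts line already ends with the hostname, either rewrite the 127.0.1.1 entry in place or append one. A sketch that renders the same snippet for an arbitrary hostname; only the Go template assembly is new, the shell logic is verbatim from the log:

    package main

    import "fmt"

    // hostsFixup returns the shell snippet the provisioner runs over SSH to make
    // the machine's hostname resolve locally via 127.0.1.1.
    func hostsFixup(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixup("old-k8s-version-570778"))
    }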
	I0127 11:44:35.720779   70686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:44:35.720803   70686 buildroot.go:174] setting up certificates
	I0127 11:44:35.720814   70686 provision.go:84] configureAuth start
	I0127 11:44:35.720826   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.721065   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.723782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724254   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.724290   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724483   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.726871   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.727196   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727322   70686 provision.go:143] copyHostCerts
	I0127 11:44:35.727369   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:44:35.727384   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:44:35.727452   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:44:35.727537   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:44:35.727545   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:44:35.727569   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:44:35.727649   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:44:35.727659   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:44:35.727686   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:44:35.727741   70686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570778 san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
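provision.go issues a server certificate whose SANs cover every name the machine answers to: loopback, the libvirt IP, localhost, minikube, and the profile name. A self-contained sketch of minting such a cert with crypto/x509; for brevity a freshly generated self-signed CA stands in for minikube's ca.pem/ca-key.pem, and error handling is elided:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube loads ca.pem / ca-key.pem from disk instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the same SAN set the log reports.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-570778"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-570778"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.193")},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
    	fmt.Println(len(der), err)
    }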
	I0127 11:44:35.901422   70686 provision.go:177] copyRemoteCerts
	I0127 11:44:35.901473   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:44:35.901501   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.904015   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904354   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.904378   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904597   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.904771   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.904967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.905126   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:32.681781   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.682249   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.078928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.079470   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.985261   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:44:36.008090   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:44:36.031357   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:44:36.053784   70686 provision.go:87] duration metric: took 332.958985ms to configureAuth
	I0127 11:44:36.053812   70686 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:44:36.053986   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:36.054066   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.056825   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.057186   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057398   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.057612   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057801   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.058191   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.058400   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.058425   70686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:44:36.280974   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:44:36.281007   70686 machine.go:96] duration metric: took 901.295604ms to provisionDockerMachine
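The container-runtime options land in a sysconfig drop-in that the crio unit picks up as an environment file, written and activated in a single SSH command. A sketch that assembles that command string; the path and variable name are taken from the log:

    package main

    import "fmt"

    // crioSysconfigCmd reproduces the one-liner from the log: create
    // /etc/sysconfig, write CRIO_MINIKUBE_OPTIONS, then restart crio so the
    // --insecure-registry flag for the service CIDR takes effect.
    func crioSysconfigCmd(serviceCIDR string) string {
    	env := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
    	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
    %s
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, env)
    }

    func main() {
    	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
    }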
	I0127 11:44:36.281020   70686 start.go:293] postStartSetup for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:44:36.281033   70686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:44:36.281048   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.281334   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:44:36.281366   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.283980   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284452   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.284493   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284602   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.284759   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.284915   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.285033   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.361994   70686 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:44:36.366066   70686 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:44:36.366085   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:44:36.366142   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:44:36.366211   70686 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:44:36.366293   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:44:36.374729   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:36.396427   70686 start.go:296] duration metric: took 115.392742ms for postStartSetup
	I0127 11:44:36.396468   70686 fix.go:56] duration metric: took 20.353754717s for fixHost
	I0127 11:44:36.396491   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.399680   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400070   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.400097   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400246   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.400438   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400591   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400821   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.401019   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.401189   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.401200   70686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:44:36.500185   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978276.474640374
	
	I0127 11:44:36.500211   70686 fix.go:216] guest clock: 1737978276.474640374
	I0127 11:44:36.500221   70686 fix.go:229] Guest: 2025-01-27 11:44:36.474640374 +0000 UTC Remote: 2025-01-27 11:44:36.396473102 +0000 UTC m=+20.504127240 (delta=78.167272ms)
	I0127 11:44:36.500239   70686 fix.go:200] guest clock delta is within tolerance: 78.167272ms
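fix.go parses the guest's `date +%s.%N` output, subtracts the host-side ("Remote") timestamp, and only resyncs when the drift exceeds a tolerance. A sketch of that comparison; the one-second tolerance below is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far it
    // drifts from the remote (host) clock; callers resync only above tolerance.
    func clockDelta(guestOut string, remote time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(remote), nil
    }

    func main() {
    	remote := time.Unix(0, 1737978276396473102) // "Remote" timestamp from the log
    	d, _ := clockDelta("1737978276.474640374", remote)
    	const tolerance = time.Second // assumed tolerance, for illustration
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d < tolerance && d > -tolerance)
    }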
	I0127 11:44:36.500256   70686 start.go:83] releasing machines lock for "old-k8s-version-570778", held for 20.457556974s
	I0127 11:44:36.500274   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.500555   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:36.503395   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503819   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.503860   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503969   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504404   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504676   70686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:44:36.504723   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.504778   70686 ssh_runner.go:195] Run: cat /version.json
	I0127 11:44:36.504802   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.507787   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.507815   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508140   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508175   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508207   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508225   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508347   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508547   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508557   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508735   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.508749   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508887   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.509027   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.509185   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.584389   70686 ssh_runner.go:195] Run: systemctl --version
	I0127 11:44:36.606466   70686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:44:36.746477   70686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:44:36.751936   70686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:44:36.751996   70686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:44:36.768698   70686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:44:36.768722   70686 start.go:495] detecting cgroup driver to use...
	I0127 11:44:36.768788   70686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:44:36.786842   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:44:36.799832   70686 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:44:36.799893   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:44:36.813751   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:44:36.827731   70686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:44:36.943310   70686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:44:37.088722   70686 docker.go:233] disabling docker service ...
	I0127 11:44:37.088789   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:44:37.103240   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:44:37.116205   70686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:44:37.254006   70686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:44:37.365382   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
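Before handing the CRI socket to CRI-O, start.go stops and masks every competing runtime: containerd, cri-docker's socket and service, then docker itself. A condensed sketch of that sequence; commands run locally here, whereas minikube sends them through its ssh_runner, and failures for absent units are tolerated:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // disableCompetingRuntimes mirrors the log's sequence: nothing but CRI-O may
    // own the CRI socket, so docker and cri-dockerd are stopped and masked.
    func disableCompetingRuntimes() {
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "containerd"},
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		// Failures are tolerated: a unit that doesn't exist can't hold the socket.
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", s, err, out)
    		}
    	}
    }

    func main() { disableCompetingRuntimes() }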
	I0127 11:44:37.379019   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:44:37.396330   70686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:44:37.396405   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.406845   70686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:44:37.406919   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.417968   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.428079   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
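The pause image and cgroup driver are patched straight into /etc/crio/crio.conf.d/02-crio.conf with sed: rewrite pause_image, rewrite cgroup_manager, delete any conmon_cgroup line, then re-add conmon_cgroup = "pod" after cgroup_manager. A sketch that emits those commands; paths and keys are copied from the log:

    package main

    import "fmt"

    // crioConfPatches returns the sed invocations from the log that point CRI-O
    // at the right pause image and at the cgroupfs driver.
    func crioConfPatches(pauseImage, cgroupManager string) []string {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    	}
    }

    func main() {
    	for _, c := range crioConfPatches("registry.k8s.io/pause:3.2", "cgroupfs") {
    		fmt.Println(c)
    	}
    }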
	I0127 11:44:37.438133   70686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:44:37.448951   70686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:44:37.458320   70686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:44:37.458382   70686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:44:37.476279   70686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
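The sysctl probe failing with "No such file or directory" just means br_netfilter isn't loaded yet, which the code itself notes "might be okay"; it falls back to modprobe and then enables IP forwarding. A sketch of that probe-then-load logic:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the log: if the bridge-nf-call-iptables
    // sysctl isn't there, load br_netfilter, then turn on ip_forward.
    func ensureBridgeNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Missing sysctl "might be okay": the module just isn't loaded yet.
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
    		}
    	}
    	if out, err := exec.Command("sudo", "sh", "-c",
    		"echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput(); err != nil {
    		return fmt.Errorf("enable ip_forward: %v (%s)", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }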
	I0127 11:44:37.486232   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:37.609635   70686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:44:37.703117   70686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:44:37.703185   70686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:44:37.707780   70686 start.go:563] Will wait 60s for crictl version
	I0127 11:44:37.707827   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:37.711561   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:44:37.746285   70686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:44:37.746370   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.774346   70686 ssh_runner.go:195] Run: crio --version
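After restarting CRI-O, the code gives the socket file and then a working `crictl version` each a 60-second budget. A sketch of the socket wait; the 500ms poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket stats the CRI socket until it appears or the budget runs out,
    // matching the "Will wait 60s for socket path /var/run/crio/crio.sock" step.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }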
	I0127 11:44:37.804220   70686 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:44:33.764774   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.764854   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.765730   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.805652   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:37.808777   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809130   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:37.809168   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809355   70686 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:44:37.813621   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:44:37.826271   70686 kubeadm.go:883] updating cluster {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:44:37.826370   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:37.826406   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:37.875128   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:37.875204   70686 ssh_runner.go:195] Run: which lz4
	I0127 11:44:37.879162   70686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:44:37.883378   70686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:44:37.883408   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:44:39.317688   70686 crio.go:462] duration metric: took 1.438551878s to copy over tarball
	I0127 11:44:39.317750   70686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:44:37.181878   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.183457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.081149   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:41.579699   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.767830   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.265799   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.264081   70686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946305063s)
	I0127 11:44:42.264109   70686 crio.go:469] duration metric: took 2.946394656s to extract the tarball
	I0127 11:44:42.264117   70686 ssh_runner.go:146] rm: /preloaded.tar.lz4
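The preload flow stats /preloaded.tar.lz4 on the guest, scps the ~473 MB tarball over when it is missing, extracts it into /var with capability xattrs preserved, and removes it afterwards. A sketch of the extract step; local exec stands in for the ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload mirrors the tar invocation in the log: capability xattrs
    // must survive extraction or binaries such as kubelet lose privileges.
    func extractPreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload missing, would scp it over first: %w", err)
    	}
    	tar := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := tar.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract: %v (%s)", err, out)
    	}
    	// The log's final "rm: /preloaded.tar.lz4" step.
    	return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
    	fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }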
	I0127 11:44:42.307411   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:42.344143   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:42.344169   70686 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:44:42.344233   70686 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.344271   70686 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.344279   70686 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.344249   70686 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.344344   70686 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.344362   70686 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:44:42.344415   70686 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.344314   70686 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.345773   70686 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.346448   70686 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.346465   70686 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.346547   70686 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.488970   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.490931   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.497125   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.504183   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.508337   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.519103   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.523858   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:44:42.600152   70686 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:44:42.600208   70686 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.600258   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629803   70686 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:44:42.629847   70686 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.629897   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629956   70686 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:44:42.629990   70686 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.630029   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656649   70686 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:44:42.656693   70686 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.656693   70686 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:44:42.656723   70686 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.656736   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656763   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.669267   70686 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:44:42.669313   70686 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.669350   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677774   70686 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:44:42.677823   70686 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:44:42.677876   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.677890   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677969   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.677987   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.678027   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.678039   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.678069   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.787131   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.787197   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.787314   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.813675   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.816360   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.816416   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.816437   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.930195   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.930298   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.930333   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.930346   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.971335   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.971389   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.971398   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:43.068772   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:44:43.068871   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:43.068882   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:44:43.068892   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:44:43.097755   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:44:43.097781   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:44:43.099343   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:44:43.116136   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:44:43.303986   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:43.439716   70686 cache_images.go:92] duration metric: took 1.095530522s to LoadCachedImages
	W0127 11:44:43.439813   70686 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
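
    The warning above means the image cache on the build host is missing the kube-proxy tarball, so minikube logs the failure and continues (the container runtime can still pull the image on demand). A quick way to see what is actually cached, using the path from the log:

        ls -l /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/
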
	I0127 11:44:43.439832   70686 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 11:44:43.439974   70686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570778 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
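
    The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube generates (written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A minimal sketch for checking it took effect on the node, assuming standard systemd tooling:

        # show kubelet.service merged with its drop-ins
        systemctl cat kubelet
        # reload unit files and start kubelet, as the log does further down
        sudo systemctl daemon-reload
        sudo systemctl start kubelet
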
	I0127 11:44:43.440069   70686 ssh_runner.go:195] Run: crio config
	I0127 11:44:43.491732   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:43.491754   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:43.491765   70686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:44:43.491782   70686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570778 NodeName:old-k8s-version-570778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:44:43.491897   70686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570778"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
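
    This generated kubeadm config is written below to /var/tmp/minikube/kubeadm.yaml.new and later compared against the active copy. To inspect or re-diff it by hand on the node, with the same paths and diff invocation the log itself uses:

        sudo cat /var/tmp/minikube/kubeadm.yaml.new
        sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
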
	
	I0127 11:44:43.491951   70686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:44:43.501539   70686 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:44:43.501593   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:44:43.510444   70686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 11:44:43.526994   70686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:44:43.542977   70686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:44:43.559986   70686 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 11:44:43.564089   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:44:43.576120   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:43.702431   70686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:44:43.719740   70686 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778 for IP: 192.168.50.193
	I0127 11:44:43.719759   70686 certs.go:194] generating shared ca certs ...
	I0127 11:44:43.719773   70686 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:43.719941   70686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:44:43.720011   70686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:44:43.720024   70686 certs.go:256] generating profile certs ...
	I0127 11:44:43.810274   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key
	I0127 11:44:43.810422   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f
	I0127 11:44:43.810480   70686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key
	I0127 11:44:43.810641   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:44:43.810684   70686 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:44:43.810697   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:44:43.810727   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:44:43.810761   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:44:43.810789   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:44:43.810838   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:43.811665   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:44:43.856247   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:44:43.898135   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:44:43.938193   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:44:43.960927   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:44:43.984028   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:44:44.008415   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:44:44.030915   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:44:44.055340   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:44:44.077556   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:44:44.101525   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:44:44.124400   70686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:44:44.140292   70686 ssh_runner.go:195] Run: openssl version
	I0127 11:44:44.145827   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:44:44.155834   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.159949   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.160022   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.165584   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:44:44.178174   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:44:44.189759   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.194947   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.195006   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.200696   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:44:44.211199   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:44:44.221194   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225257   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225297   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.230582   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
	I0127 11:44:44.240578   70686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:44:44.245082   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:44:44.252016   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:44:44.257760   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:44:44.264902   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:44:44.270934   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:44:44.276642   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
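
    Each openssl x509 -checkend 86400 call above exits 0 only if the certificate is still valid 24 hours from now; that is evidently how the restart path decides whether certs need regenerating. A hand-run equivalent over the same cert directory (the loop and echo messages are illustrative, not from the log):

        for c in apiserver apiserver-kubelet-client front-proxy-client; do
          sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/$c.crt \
            && echo "$c: valid for >= 24h" || echo "$c: expiring soon"
        done
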
	I0127 11:44:44.282062   70686 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:44.282152   70686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:44:44.282190   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.318594   70686 cri.go:89] found id: ""
	I0127 11:44:44.318650   70686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:44:44.328642   70686 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:44:44.328665   70686 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:44:44.328716   70686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:44:44.337760   70686 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:44:44.338436   70686 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:44.338787   70686 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570778" cluster setting kubeconfig missing "old-k8s-version-570778" context setting]
	I0127 11:44:44.339275   70686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:44.379353   70686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:44:44.389831   70686 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0127 11:44:44.389864   70686 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:44:44.389876   70686 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:44:44.389917   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.429276   70686 cri.go:89] found id: ""
	I0127 11:44:44.429352   70686 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:44:44.446502   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:44:44.456332   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:44:44.456358   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:44:44.456406   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:44:44.465009   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:44:44.465064   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:44:44.474468   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:44:44.483271   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:44:44.483333   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:44:44.493091   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.501826   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:44:44.501887   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.511619   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:44:44.520146   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:44:44.520215   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:44:44.529284   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:44:44.538474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:44.669112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.430626   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.649318   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.747035   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
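
    The five kubeadm "init phase" invocations above re-run only the certs, kubeconfig, kubelet-start, control-plane and etcd steps against the generated config, rather than a full kubeadm init. Run by hand, one step looks like this (same binary path and config file as the log):

        sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
          kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
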
	I0127 11:44:45.834253   70686 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:44:45.834345   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
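
    The long run of identical pgrep lines that follows is minikube polling, roughly every 500ms judging by the timestamps, for a kube-apiserver process that never appears. A rough shell equivalent of that wait loop (a sketch, not minikube's actual implementation):

        until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
          sleep 0.5   # matches the ~500ms cadence seen in the log
        done
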
	I0127 11:44:41.682339   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.682496   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.911112   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.080526   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:44.265972   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.765113   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.334836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.834834   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.334682   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.834945   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.335112   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.834442   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.335101   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.835321   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.334868   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.835371   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.181944   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.681423   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.580901   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.079391   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:49.265367   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.765180   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.335142   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.835388   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.334604   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.835044   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.334680   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.834411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.335010   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.834554   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.181432   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.681540   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.081988   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:55.580478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:54.265141   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.265203   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.265900   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.335128   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.335140   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.835042   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.334817   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.834443   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.334777   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.835437   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.334852   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.834590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.182005   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.681494   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.079513   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.079905   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:02.080706   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.765897   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.265622   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:01.335351   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.835115   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.334828   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.834481   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.334592   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.834653   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.335201   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.834728   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.334872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.835121   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.181668   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.182704   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.681195   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:04.579620   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.079240   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.765054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.765605   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:06.335002   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:06.835393   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.334717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.835225   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.335465   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.835195   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.335007   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.835362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.334590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.835441   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.180735   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.181326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.079806   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.081218   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.264844   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:12.765530   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.334541   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:11.835283   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.335343   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.834836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.335067   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.834637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.334394   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.834608   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.835178   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.181440   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.182012   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:13.579850   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.580199   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.265832   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:17.765291   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.334479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.835000   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.335139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.835227   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.335309   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.835170   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.334384   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.835348   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.334845   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.835383   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.681535   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.181289   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.080468   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:20.579930   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.580421   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.765695   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.264793   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.335090   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.834734   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.335362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.834567   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.335485   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.835040   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.334533   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.834544   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.334975   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.834941   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.682460   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.181465   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:25.080118   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:27.579811   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.265167   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.265742   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.334897   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.834607   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.334771   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.335354   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.834876   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.335076   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.334594   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.834603   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.181841   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.680961   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:30.079284   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.079751   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.765734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.266015   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.335153   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.834967   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.335109   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.834477   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.335107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.835110   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.334563   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.835358   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.334401   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.835107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.185937   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.680940   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:35.681777   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:34.580737   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.080749   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.765617   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.265646   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:38.266295   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.335163   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:36.835139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.334510   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.834447   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.334776   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.834844   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.334806   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.835253   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.334905   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.834948   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.682410   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.182049   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.579328   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.580544   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.765177   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:43.265601   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.334866   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:41.834518   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.335359   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.834415   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.335098   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.834540   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.335306   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.834575   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.335244   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.835032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:45.835116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:45.868609   70686 cri.go:89] found id: ""
	I0127 11:45:45.868640   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.868652   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:45.868659   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:45.868718   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:45.907767   70686 cri.go:89] found id: ""
	I0127 11:45:45.907796   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.907805   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:45.907812   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:45.907870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:42.182202   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.680856   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.079255   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:46.079779   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.765111   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:47.765359   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.940736   70686 cri.go:89] found id: ""
	I0127 11:45:45.940781   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.940791   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:45.940800   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:45.940945   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:45.972511   70686 cri.go:89] found id: ""
	I0127 11:45:45.972536   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.972544   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:45.972550   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:45.972621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:46.004929   70686 cri.go:89] found id: ""
	I0127 11:45:46.004958   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.004966   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:46.004971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:46.005020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:46.037172   70686 cri.go:89] found id: ""
	I0127 11:45:46.037205   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.037217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:46.037224   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:46.037284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:46.070282   70686 cri.go:89] found id: ""
	I0127 11:45:46.070311   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.070322   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:46.070330   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:46.070387   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:46.106109   70686 cri.go:89] found id: ""
	I0127 11:45:46.106139   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.106150   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:46.106163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:46.106176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:46.147686   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:46.147719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:46.199085   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:46.199119   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:46.212487   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:46.212515   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:46.331675   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
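
    The "connection refused" on localhost:8443 is consistent with the empty crictl listings above: no apiserver container ever started, so nothing is listening on the API port. Two quick checks from the node (standard tools; the second mirrors the log's own query):

        sudo ss -tlnp | grep 8443
        sudo crictl ps -a --name=kube-apiserver
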
	I0127 11:45:46.331698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:46.331710   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:48.902413   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:48.915872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:48.915933   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:48.950168   70686 cri.go:89] found id: ""
	I0127 11:45:48.950215   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.950223   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:48.950229   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:48.950280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:48.981915   70686 cri.go:89] found id: ""
	I0127 11:45:48.981947   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.981958   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:48.981966   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:48.982030   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:49.022418   70686 cri.go:89] found id: ""
	I0127 11:45:49.022448   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.022461   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:49.022468   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:49.022531   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:49.066138   70686 cri.go:89] found id: ""
	I0127 11:45:49.066164   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.066174   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:49.066181   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:49.066240   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:49.107856   70686 cri.go:89] found id: ""
	I0127 11:45:49.107887   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.107895   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:49.107901   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:49.107951   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:49.158460   70686 cri.go:89] found id: ""
	I0127 11:45:49.158492   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.158519   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:49.158545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:49.158608   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:49.194805   70686 cri.go:89] found id: ""
	I0127 11:45:49.194831   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.194839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:49.194844   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:49.194889   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:49.227445   70686 cri.go:89] found id: ""
	I0127 11:45:49.227475   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.227483   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:49.227491   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:49.227502   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:49.280386   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:49.280418   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:49.293755   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:49.293785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:49.366338   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:45:49.366366   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:49.366381   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:49.444064   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:49.444102   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
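Each retry cycle above starts with the same sweep: the collector asks crictl for every expected control-plane container and logs a warning when nothing matches, which is why all eight lookups return `found id: ""`. Below is a minimal Go sketch of that sweep, shelling out the same way the `Run:` lines show; the structure and names are illustrative, not minikube's actual cri.go implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components probed in each cycle, as listed in the cri.go lines above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
	"kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: found %v\n", name, ids)
		} else {
			// Matches the W-level "No container was found matching" lines.
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}
```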
	I0127 11:45:47.182717   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:49.681160   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.080162   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.579311   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.580182   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.266104   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.266221   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:51.990077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:52.002185   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:52.002244   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:52.033585   70686 cri.go:89] found id: ""
	I0127 11:45:52.033608   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.033616   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:52.033622   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:52.033671   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:52.063740   70686 cri.go:89] found id: ""
	I0127 11:45:52.063766   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.063776   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:52.063784   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:52.063846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:52.098052   70686 cri.go:89] found id: ""
	I0127 11:45:52.098089   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.098115   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:52.098122   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:52.098186   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:52.130011   70686 cri.go:89] found id: ""
	I0127 11:45:52.130039   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.130048   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:52.130057   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:52.130101   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:52.163864   70686 cri.go:89] found id: ""
	I0127 11:45:52.163887   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.163894   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:52.163899   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:52.163946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:52.195990   70686 cri.go:89] found id: ""
	I0127 11:45:52.196020   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.196029   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:52.196034   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:52.196079   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:52.227747   70686 cri.go:89] found id: ""
	I0127 11:45:52.227780   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.227792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:52.227799   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:52.227860   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:52.262186   70686 cri.go:89] found id: ""
	I0127 11:45:52.262214   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.262224   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:52.262234   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:52.262249   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:52.318567   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:52.318603   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:52.332621   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:52.332646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:52.403429   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:45:52.403451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:52.403462   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:52.482267   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:52.482309   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.018478   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:55.032583   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:55.032655   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:55.070418   70686 cri.go:89] found id: ""
	I0127 11:45:55.070446   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.070454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:55.070460   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:55.070534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:55.102785   70686 cri.go:89] found id: ""
	I0127 11:45:55.102820   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.102831   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:55.102837   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:55.102893   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:55.140432   70686 cri.go:89] found id: ""
	I0127 11:45:55.140466   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.140477   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:55.140483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:55.140548   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:55.173071   70686 cri.go:89] found id: ""
	I0127 11:45:55.173097   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.173107   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:55.173115   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:55.173175   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:55.207834   70686 cri.go:89] found id: ""
	I0127 11:45:55.207867   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.207878   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:55.207886   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:55.207949   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:55.240758   70686 cri.go:89] found id: ""
	I0127 11:45:55.240786   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.240794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:55.240807   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:55.240852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:55.276038   70686 cri.go:89] found id: ""
	I0127 11:45:55.276067   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.276078   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:55.276085   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:55.276135   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:55.307786   70686 cri.go:89] found id: ""
	I0127 11:45:55.307818   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.307829   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:55.307841   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:55.307855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:55.384874   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:55.384908   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.425141   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:55.425169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:55.479108   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:55.479144   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:55.492988   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:55.493018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:55.557856   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:45:51.681649   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:53.681709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.580408   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.079629   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.765284   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:56.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
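Each cycle is gated by the probe on its first line: as long as `sudo pgrep -xnf kube-apiserver.*minikube.*` matches no process, `kubectl describe nodes` keeps being refused on localhost:8443 and the whole gathering pass repeats. A hedged sketch of that outer wait loop follows; the timeout and control flow are assumptions for illustration, not minikube's code.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe from the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when at least one process matched.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(10 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// While the probe fails, the collector re-gathers kubelet, dmesg,
		// CRI-O and container-status logs before the next attempt.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```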
	I0127 11:45:58.059727   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:58.072633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:58.072713   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:58.107460   70686 cri.go:89] found id: ""
	I0127 11:45:58.107494   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.107505   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:58.107513   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:58.107570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:58.143678   70686 cri.go:89] found id: ""
	I0127 11:45:58.143709   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.143721   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:58.143729   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:58.143794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:58.177914   70686 cri.go:89] found id: ""
	I0127 11:45:58.177942   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.177949   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:58.177957   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:58.178003   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:58.210641   70686 cri.go:89] found id: ""
	I0127 11:45:58.210679   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.210690   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:58.210698   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:58.210759   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:58.242373   70686 cri.go:89] found id: ""
	I0127 11:45:58.242408   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.242420   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:58.242427   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:58.242494   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:58.277921   70686 cri.go:89] found id: ""
	I0127 11:45:58.277954   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.277965   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:58.277973   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:58.278033   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:58.310342   70686 cri.go:89] found id: ""
	I0127 11:45:58.310373   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.310384   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:58.310391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:58.310459   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:58.345616   70686 cri.go:89] found id: ""
	I0127 11:45:58.345649   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.345660   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:58.345671   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:58.345687   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:58.380655   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:58.380680   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:58.433828   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:58.433859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:58.447666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:58.447703   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:58.510668   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:45:58.510698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:58.510714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:56.181754   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.682655   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.080820   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.580837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.266054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.766023   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.087242   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:01.099871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:01.099926   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:01.132252   70686 cri.go:89] found id: ""
	I0127 11:46:01.132285   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.132293   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:01.132298   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:01.132348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:01.163920   70686 cri.go:89] found id: ""
	I0127 11:46:01.163949   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.163960   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:01.163967   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:01.164034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:01.198833   70686 cri.go:89] found id: ""
	I0127 11:46:01.198858   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.198865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:01.198871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:01.198916   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:01.238722   70686 cri.go:89] found id: ""
	I0127 11:46:01.238753   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.238763   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:01.238779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:01.238844   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:01.272868   70686 cri.go:89] found id: ""
	I0127 11:46:01.272892   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.272898   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:01.272903   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:01.272947   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:01.307986   70686 cri.go:89] found id: ""
	I0127 11:46:01.308015   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.308024   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:01.308029   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:01.308082   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:01.341997   70686 cri.go:89] found id: ""
	I0127 11:46:01.342027   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.342039   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:01.342047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:01.342109   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:01.374940   70686 cri.go:89] found id: ""
	I0127 11:46:01.374968   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.374978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:01.374989   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:01.375002   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:01.428465   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:01.428500   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:01.442684   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:01.442708   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:01.512159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:01.512185   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:01.512198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:01.586215   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:01.586265   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.127745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:04.140798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:04.140873   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:04.175150   70686 cri.go:89] found id: ""
	I0127 11:46:04.175186   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.175197   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:04.175204   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:04.175282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:04.210697   70686 cri.go:89] found id: ""
	I0127 11:46:04.210727   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.210736   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:04.210744   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:04.210800   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:04.240777   70686 cri.go:89] found id: ""
	I0127 11:46:04.240803   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.240811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:04.240821   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:04.240865   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:04.273040   70686 cri.go:89] found id: ""
	I0127 11:46:04.273076   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.273087   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:04.273094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:04.273151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:04.308441   70686 cri.go:89] found id: ""
	I0127 11:46:04.308468   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.308478   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:04.308484   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:04.308546   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:04.346756   70686 cri.go:89] found id: ""
	I0127 11:46:04.346783   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.346793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:04.346802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:04.346870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:04.381718   70686 cri.go:89] found id: ""
	I0127 11:46:04.381747   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.381758   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:04.381766   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:04.381842   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:04.415875   70686 cri.go:89] found id: ""
	I0127 11:46:04.415913   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.415921   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:04.415930   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:04.415942   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:04.499951   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:04.499990   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.539557   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:04.539592   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:04.595977   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:04.596011   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:04.609081   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:04.609107   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:04.678937   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:01.181382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.681326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:05.682184   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.581478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.079382   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:04.266171   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.765288   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:07.179760   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:07.193186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:07.193259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:07.226455   70686 cri.go:89] found id: ""
	I0127 11:46:07.226487   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.226498   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:07.226507   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:07.226570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:07.259391   70686 cri.go:89] found id: ""
	I0127 11:46:07.259427   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.259439   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:07.259447   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:07.259520   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:07.295281   70686 cri.go:89] found id: ""
	I0127 11:46:07.295314   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.295326   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:07.295334   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:07.295384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:07.330145   70686 cri.go:89] found id: ""
	I0127 11:46:07.330177   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.330186   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:07.330194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:07.330260   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:07.368846   70686 cri.go:89] found id: ""
	I0127 11:46:07.368875   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.368882   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:07.368889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:07.368938   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:07.404802   70686 cri.go:89] found id: ""
	I0127 11:46:07.404832   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.404843   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:07.404851   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:07.404914   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:07.437053   70686 cri.go:89] found id: ""
	I0127 11:46:07.437081   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.437090   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:07.437096   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:07.437142   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:07.474455   70686 cri.go:89] found id: ""
	I0127 11:46:07.474482   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.474490   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:07.474498   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:07.474510   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:07.529193   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:07.529229   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:07.543329   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:07.543365   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:07.623019   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:07.623043   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:07.623057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:07.701237   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:07.701277   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:10.239258   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:10.252360   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:10.252423   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:10.288112   70686 cri.go:89] found id: ""
	I0127 11:46:10.288135   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.288143   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:10.288149   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:10.288195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:10.323260   70686 cri.go:89] found id: ""
	I0127 11:46:10.323288   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.323296   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:10.323302   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:10.323358   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:10.358662   70686 cri.go:89] found id: ""
	I0127 11:46:10.358686   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.358694   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:10.358700   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:10.358744   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:10.397231   70686 cri.go:89] found id: ""
	I0127 11:46:10.397262   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.397273   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:10.397281   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:10.397384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:10.430384   70686 cri.go:89] found id: ""
	I0127 11:46:10.430411   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.430419   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:10.430425   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:10.430490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:10.461361   70686 cri.go:89] found id: ""
	I0127 11:46:10.461387   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.461396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:10.461404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:10.461464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:10.497276   70686 cri.go:89] found id: ""
	I0127 11:46:10.497309   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.497318   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:10.497324   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:10.497389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:10.530718   70686 cri.go:89] found id: ""
	I0127 11:46:10.530751   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.530762   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:10.530772   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:10.530785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:10.578801   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:10.578839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:10.591288   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:10.591312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:10.655021   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:10.655051   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:10.655065   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:10.731115   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:10.731151   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
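Between sweeps, the collector runs the four "Gathering logs for ..." steps visible above, each a single shell pipeline over SSH. A compact sketch of that fan-out follows; the commands are copied verbatim from the `Run:` lines, while the surrounding map and loop are invented for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
)

// Each "Gathering logs for ..." step maps to one shell pipeline.
var gathers = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gathers {
		// Equivalent of: /bin/bash -c "<cmd>" (run remotely in minikube)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s ===\n%s", name, out)
		if err != nil {
			fmt.Printf("(gather %q failed: %v)\n", name, err)
		}
	}
}
```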
	I0127 11:46:08.181149   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.681951   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.079678   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.079837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:12.580869   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:11.265066   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.265843   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.267173   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:13.280623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:13.280688   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:13.314325   70686 cri.go:89] found id: ""
	I0127 11:46:13.314362   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.314372   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:13.314380   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:13.314441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:13.346889   70686 cri.go:89] found id: ""
	I0127 11:46:13.346918   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.346929   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:13.346936   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:13.346989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:13.378900   70686 cri.go:89] found id: ""
	I0127 11:46:13.378929   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.378939   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:13.378945   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:13.379004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:13.412919   70686 cri.go:89] found id: ""
	I0127 11:46:13.412952   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.412963   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:13.412971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:13.413027   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:13.444222   70686 cri.go:89] found id: ""
	I0127 11:46:13.444250   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.444260   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:13.444266   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:13.444317   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:13.474180   70686 cri.go:89] found id: ""
	I0127 11:46:13.474206   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.474212   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:13.474218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:13.474277   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:13.507679   70686 cri.go:89] found id: ""
	I0127 11:46:13.507707   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.507718   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:13.507726   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:13.507785   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:13.540402   70686 cri.go:89] found id: ""
	I0127 11:46:13.540428   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.540436   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:13.540444   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:13.540454   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:13.619310   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:13.619341   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:13.659541   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:13.659568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:13.710958   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:13.710992   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:13.724362   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:13.724387   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:13.799175   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:13.181930   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.681382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.080714   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:17.580030   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.766366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.265607   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:16.299872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:16.313092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:16.313151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:16.344606   70686 cri.go:89] found id: ""
	I0127 11:46:16.344636   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.344647   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:16.344654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:16.344709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:16.378025   70686 cri.go:89] found id: ""
	I0127 11:46:16.378052   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.378060   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:16.378065   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:16.378112   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:16.409333   70686 cri.go:89] found id: ""
	I0127 11:46:16.409359   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.409366   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:16.409372   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:16.409417   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:16.440176   70686 cri.go:89] found id: ""
	I0127 11:46:16.440199   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.440207   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:16.440218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:16.440303   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:16.474293   70686 cri.go:89] found id: ""
	I0127 11:46:16.474325   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.474333   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:16.474339   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:16.474386   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:16.505778   70686 cri.go:89] found id: ""
	I0127 11:46:16.505801   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.505808   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:16.505814   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:16.505867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:16.540769   70686 cri.go:89] found id: ""
	I0127 11:46:16.540797   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.540807   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:16.540815   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:16.540870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:16.576592   70686 cri.go:89] found id: ""
	I0127 11:46:16.576620   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.576630   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:16.576640   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:16.576652   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:16.653408   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:16.653443   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:16.692433   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:16.692458   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:16.740803   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:16.740837   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:16.753287   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:16.753312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:16.826095   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 11:46:19.327736   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:19.340166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:19.340220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:19.371540   70686 cri.go:89] found id: ""
	I0127 11:46:19.371578   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.371591   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:19.371600   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:19.371673   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:19.404729   70686 cri.go:89] found id: ""
	I0127 11:46:19.404764   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.404774   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:19.404781   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:19.404837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:19.439789   70686 cri.go:89] found id: ""
	I0127 11:46:19.439825   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.439837   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:19.439846   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:19.439906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:19.470570   70686 cri.go:89] found id: ""
	I0127 11:46:19.470600   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.470611   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:19.470619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:19.470681   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:19.501777   70686 cri.go:89] found id: ""
	I0127 11:46:19.501805   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.501816   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:19.501824   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:19.501880   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:19.534181   70686 cri.go:89] found id: ""
	I0127 11:46:19.534210   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.534217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:19.534223   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:19.534284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:19.566593   70686 cri.go:89] found id: ""
	I0127 11:46:19.566620   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.566628   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:19.566633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:19.566693   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:19.599915   70686 cri.go:89] found id: ""
	I0127 11:46:19.599940   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.599951   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:19.599966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:19.599981   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:19.650351   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:19.650385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:19.663542   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:19.663567   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:19.734523   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:19.734552   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:19.734568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:19.808148   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:19.808182   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
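	Each retry then gathers the same log sources over SSH. The exact commands, copied from the Run: lines above and runnable as-is on the node:
	
	sudo journalctl -u kubelet -n 400    # kubelet unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and worse
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig    # fails while the apiserver is down
	sudo journalctl -u crio -n 400    # CRI-O unit logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # container status, with docker fallback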
	I0127 11:46:18.181077   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.181255   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:19.580896   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.079867   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.765484   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
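	The interleaved pod_ready.go lines come from three other profiles (PIDs 70237, 69688, 69396) polling their metrics-server pods for the Ready condition. A hand-run equivalent of one such check (hypothetical; the test uses the Go client, and the k8s-app label is assumed):
	
	# one-shot: read the Ready condition of a named pod
	kubectl -n kube-system get pod metrics-server-f79f97bbb-75rzv \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# blocking form, roughly matching the tests' 4m0s wait
	kubectl -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=240s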
	I0127 11:46:22.345687   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:22.359497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:22.359568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:22.392346   70686 cri.go:89] found id: ""
	I0127 11:46:22.392372   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.392381   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:22.392386   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:22.392443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:22.425056   70686 cri.go:89] found id: ""
	I0127 11:46:22.425081   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.425089   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:22.425093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:22.425146   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:22.460472   70686 cri.go:89] found id: ""
	I0127 11:46:22.460501   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.460512   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:22.460519   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:22.460580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:22.494621   70686 cri.go:89] found id: ""
	I0127 11:46:22.494646   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.494656   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:22.494663   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:22.494724   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:22.531878   70686 cri.go:89] found id: ""
	I0127 11:46:22.531902   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.531909   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:22.531914   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:22.531961   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:22.566924   70686 cri.go:89] found id: ""
	I0127 11:46:22.566946   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.566953   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:22.566960   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:22.567019   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:22.601357   70686 cri.go:89] found id: ""
	I0127 11:46:22.601384   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.601394   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:22.601402   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:22.601467   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:22.634574   70686 cri.go:89] found id: ""
	I0127 11:46:22.634611   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.634620   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:22.634631   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:22.634641   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:22.683998   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:22.684027   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:22.697042   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:22.697068   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:22.758991   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:22.759018   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:22.759034   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:22.837791   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:22.837824   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:25.374998   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:25.387470   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:25.387527   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:25.419525   70686 cri.go:89] found id: ""
	I0127 11:46:25.419552   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.419559   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:25.419565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:25.419637   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:25.452027   70686 cri.go:89] found id: ""
	I0127 11:46:25.452051   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.452059   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:25.452064   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:25.452111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:25.482868   70686 cri.go:89] found id: ""
	I0127 11:46:25.482899   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.482909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:25.482916   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:25.482978   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:25.513413   70686 cri.go:89] found id: ""
	I0127 11:46:25.513438   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.513447   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:25.513453   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:25.513497   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:25.544499   70686 cri.go:89] found id: ""
	I0127 11:46:25.544525   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.544534   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:25.544545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:25.544591   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:25.576649   70686 cri.go:89] found id: ""
	I0127 11:46:25.576676   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.576686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:25.576694   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:25.576749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:25.613447   70686 cri.go:89] found id: ""
	I0127 11:46:25.613476   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.613483   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:25.613489   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:25.613547   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:25.645468   70686 cri.go:89] found id: ""
	I0127 11:46:25.645492   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.645503   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:25.645513   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:25.645530   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:25.724060   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:25.724112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:25.758966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:25.759001   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:25.809187   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:25.809218   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:25.822532   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:25.822563   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:25.889713   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:22.682762   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.180989   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:24.580025   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.079771   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.265011   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.265712   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:28.390290   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:28.402720   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:28.402794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:28.433933   70686 cri.go:89] found id: ""
	I0127 11:46:28.433960   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.433971   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:28.433979   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:28.434037   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:28.465830   70686 cri.go:89] found id: ""
	I0127 11:46:28.465864   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.465874   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:28.465881   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:28.465939   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:28.497527   70686 cri.go:89] found id: ""
	I0127 11:46:28.497562   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.497570   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:28.497579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:28.497645   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:28.531270   70686 cri.go:89] found id: ""
	I0127 11:46:28.531299   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.531308   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:28.531316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:28.531371   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:28.563348   70686 cri.go:89] found id: ""
	I0127 11:46:28.563369   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.563376   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:28.563381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:28.563426   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:28.596997   70686 cri.go:89] found id: ""
	I0127 11:46:28.597020   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.597027   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:28.597032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:28.597078   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:28.631710   70686 cri.go:89] found id: ""
	I0127 11:46:28.631744   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.631756   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:28.631763   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:28.631822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:28.691511   70686 cri.go:89] found id: ""
	I0127 11:46:28.691543   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.691554   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:28.691565   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:28.691579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:28.742602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:28.742635   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:28.756184   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:28.756207   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:28.830835   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:28.830857   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:28.830868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:28.905594   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:28.905630   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:27.181377   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.682869   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.580416   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.080512   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.765386   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:31.766041   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:31.441466   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:31.453810   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:31.453884   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:31.486385   70686 cri.go:89] found id: ""
	I0127 11:46:31.486419   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.486428   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:31.486433   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:31.486486   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:31.518387   70686 cri.go:89] found id: ""
	I0127 11:46:31.518414   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.518422   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:31.518427   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:31.518487   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:31.553495   70686 cri.go:89] found id: ""
	I0127 11:46:31.553519   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.553527   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:31.553532   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:31.553585   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:31.587152   70686 cri.go:89] found id: ""
	I0127 11:46:31.587178   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.587187   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:31.587194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:31.587249   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:31.617431   70686 cri.go:89] found id: ""
	I0127 11:46:31.617459   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.617468   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:31.617474   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:31.617519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:31.651686   70686 cri.go:89] found id: ""
	I0127 11:46:31.651712   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.651720   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:31.651725   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:31.651771   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:31.684941   70686 cri.go:89] found id: ""
	I0127 11:46:31.684967   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.684977   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:31.684984   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:31.685042   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:31.718413   70686 cri.go:89] found id: ""
	I0127 11:46:31.718440   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.718451   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:31.718461   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:31.718476   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:31.767445   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:31.767470   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:31.780922   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:31.780949   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:31.846438   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:31.846462   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:31.846474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:31.926888   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:31.926923   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.465125   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:34.479852   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:34.479930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:34.511060   70686 cri.go:89] found id: ""
	I0127 11:46:34.511084   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.511093   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:34.511098   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:34.511143   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:34.544234   70686 cri.go:89] found id: ""
	I0127 11:46:34.544263   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.544269   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:34.544275   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:34.544319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:34.578776   70686 cri.go:89] found id: ""
	I0127 11:46:34.578799   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.578809   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:34.578816   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:34.578871   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:34.611130   70686 cri.go:89] found id: ""
	I0127 11:46:34.611154   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.611163   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:34.611168   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:34.611225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:34.643126   70686 cri.go:89] found id: ""
	I0127 11:46:34.643153   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.643163   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:34.643171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:34.643227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:34.678033   70686 cri.go:89] found id: ""
	I0127 11:46:34.678076   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.678087   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:34.678094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:34.678160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:34.712414   70686 cri.go:89] found id: ""
	I0127 11:46:34.712443   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.712454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:34.712461   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:34.712534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:34.745083   70686 cri.go:89] found id: ""
	I0127 11:46:34.745109   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.745116   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:34.745124   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:34.745136   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:34.757666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:34.757694   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:34.823196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:34.823218   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:34.823230   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:34.905878   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:34.905913   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.942463   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:34.942488   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:32.181312   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.181612   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.579348   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.579626   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:33.766304   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.265533   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:37.493333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:37.505875   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:37.505935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:37.538445   70686 cri.go:89] found id: ""
	I0127 11:46:37.538470   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.538478   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:37.538484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:37.538537   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:37.569576   70686 cri.go:89] found id: ""
	I0127 11:46:37.569607   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.569618   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:37.569625   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:37.569687   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:37.603340   70686 cri.go:89] found id: ""
	I0127 11:46:37.603366   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.603376   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:37.603383   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:37.603441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:37.637178   70686 cri.go:89] found id: ""
	I0127 11:46:37.637211   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.637221   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:37.637230   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:37.637294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:37.669332   70686 cri.go:89] found id: ""
	I0127 11:46:37.669359   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.669367   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:37.669373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:37.669420   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:37.701983   70686 cri.go:89] found id: ""
	I0127 11:46:37.702012   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.702021   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:37.702028   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:37.702089   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:37.734833   70686 cri.go:89] found id: ""
	I0127 11:46:37.734856   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.734865   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:37.734871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:37.734927   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:37.768113   70686 cri.go:89] found id: ""
	I0127 11:46:37.768141   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.768149   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:37.768157   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:37.768167   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:37.839883   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:37.839917   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:37.876177   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:37.876210   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:37.928640   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:37.928669   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:37.942971   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:37.942995   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:38.012611   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
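	Every describe-nodes attempt fails the same way: nothing answers on localhost:8443, which is consistent with the empty kube-apiserver container listings above. A hypothetical triage for that symptom, not part of the test run:
	
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"    # is the apiserver port bound?
	sudo crictl ps -a --name kube-apiserver                          # did an apiserver container ever start?
	curl -ks https://localhost:8443/healthz                          # apiserver health, if anything is up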
	I0127 11:46:40.514324   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:40.526994   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:40.527053   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:40.561170   70686 cri.go:89] found id: ""
	I0127 11:46:40.561192   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.561200   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:40.561205   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:40.561248   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:40.597933   70686 cri.go:89] found id: ""
	I0127 11:46:40.597964   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.597973   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:40.597981   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:40.598049   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:40.633227   70686 cri.go:89] found id: ""
	I0127 11:46:40.633255   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.633263   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:40.633287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:40.633348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:40.667332   70686 cri.go:89] found id: ""
	I0127 11:46:40.667360   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.667368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:40.667373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:40.667434   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:40.702346   70686 cri.go:89] found id: ""
	I0127 11:46:40.702372   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.702383   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:40.702391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:40.702447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:40.733890   70686 cri.go:89] found id: ""
	I0127 11:46:40.733916   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.733924   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:40.733929   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:40.733979   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:40.766986   70686 cri.go:89] found id: ""
	I0127 11:46:40.767005   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.767011   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:40.767016   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:40.767069   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:40.809290   70686 cri.go:89] found id: ""
	I0127 11:46:40.809320   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.809331   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:40.809342   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:40.809363   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:40.863970   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:40.864006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:40.886163   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:40.886188   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:46:36.181772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.181835   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.682630   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:39.080089   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.080522   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.766734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.264746   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	W0127 11:46:40.951248   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.951277   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:40.951293   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:41.025220   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:41.025251   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.562970   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:43.575475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:43.575540   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:43.614847   70686 cri.go:89] found id: ""
	I0127 11:46:43.614875   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.614885   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:43.614892   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:43.614957   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:43.651178   70686 cri.go:89] found id: ""
	I0127 11:46:43.651208   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.651219   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:43.651227   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:43.651282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:43.683752   70686 cri.go:89] found id: ""
	I0127 11:46:43.683777   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.683783   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:43.683788   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:43.683846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:43.718384   70686 cri.go:89] found id: ""
	I0127 11:46:43.718418   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.718429   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:43.718486   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:43.718557   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:43.751566   70686 cri.go:89] found id: ""
	I0127 11:46:43.751619   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.751631   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:43.751639   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:43.751701   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:43.785338   70686 cri.go:89] found id: ""
	I0127 11:46:43.785370   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.785381   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:43.785390   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:43.785453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:43.825291   70686 cri.go:89] found id: ""
	I0127 11:46:43.825320   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.825330   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:43.825337   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:43.825397   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:43.856396   70686 cri.go:89] found id: ""
	I0127 11:46:43.856422   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.856429   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:43.856437   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:43.856448   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:43.907954   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:43.907991   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:43.920963   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:43.920987   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:43.986527   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:43.986547   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:43.986562   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:44.062764   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:44.062796   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.181118   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.185722   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.080947   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.579654   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.265779   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:46.259360   69396 pod_ready.go:82] duration metric: took 4m0.000152356s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:46.259407   69396 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:46.259422   69396 pod_ready.go:39] duration metric: took 4m14.538674469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:46.259449   69396 kubeadm.go:597] duration metric: took 4m21.955300548s to restartPrimaryControlPlane
	W0127 11:46:46.259525   69396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:46.259559   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
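	At 11:46:46 profile 69396 hits its 4m0s WaitExtra deadline, gives up on restarting the existing control plane, and wipes it with the pinned kubeadm. The command, verbatim from the Run: line above (kubeadm reset removes the static-pod manifests and local etcd data so the node can be re-initialized):
	
	sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force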
	I0127 11:46:46.599548   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:46.625909   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:46.625985   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:46.670285   70686 cri.go:89] found id: ""
	I0127 11:46:46.670317   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.670329   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:46.670337   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:46.670408   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:46.703591   70686 cri.go:89] found id: ""
	I0127 11:46:46.703628   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.703636   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:46.703642   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:46.703689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:46.734451   70686 cri.go:89] found id: ""
	I0127 11:46:46.734475   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.734482   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:46.734487   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:46.734539   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:46.768854   70686 cri.go:89] found id: ""
	I0127 11:46:46.768879   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.768886   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:46.768891   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:46.768937   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:46.798912   70686 cri.go:89] found id: ""
	I0127 11:46:46.798937   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.798945   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:46.798951   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:46.799009   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:46.832665   70686 cri.go:89] found id: ""
	I0127 11:46:46.832689   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.832696   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:46.832702   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:46.832751   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:46.863964   70686 cri.go:89] found id: ""
	I0127 11:46:46.863990   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.863998   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:46.864003   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:46.864064   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:46.902558   70686 cri.go:89] found id: ""
	I0127 11:46:46.902595   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.902606   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:46.902617   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:46.902632   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:46.937731   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:46.937754   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:46.986804   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:46.986839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:47.000095   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:47.000142   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:47.064072   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:47.064099   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:47.064118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:49.640691   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:49.653166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:49.653225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:49.687904   70686 cri.go:89] found id: ""
	I0127 11:46:49.687928   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.687938   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:49.687945   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:49.688000   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:49.725500   70686 cri.go:89] found id: ""
	I0127 11:46:49.725528   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.725537   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:49.725549   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:49.725610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:49.757793   70686 cri.go:89] found id: ""
	I0127 11:46:49.757823   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.757834   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:49.757841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:49.757901   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:49.789916   70686 cri.go:89] found id: ""
	I0127 11:46:49.789945   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.789955   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:49.789962   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:49.790020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:49.821431   70686 cri.go:89] found id: ""
	I0127 11:46:49.821461   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.821472   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:49.821479   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:49.821541   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:49.853511   70686 cri.go:89] found id: ""
	I0127 11:46:49.853541   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.853548   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:49.853554   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:49.853605   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:49.887197   70686 cri.go:89] found id: ""
	I0127 11:46:49.887225   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.887232   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:49.887237   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:49.887313   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:49.920423   70686 cri.go:89] found id: ""
	I0127 11:46:49.920454   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.920465   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:49.920476   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:49.920489   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:49.970455   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:49.970487   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:49.985812   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:49.985844   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:50.055494   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:50.055520   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:50.055536   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:50.134706   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:50.134743   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:47.682388   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.180618   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:48.080040   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.580505   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.580590   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.675280   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:52.690464   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:52.690545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:52.722566   70686 cri.go:89] found id: ""
	I0127 11:46:52.722600   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.722611   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:52.722621   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:52.722683   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:52.754684   70686 cri.go:89] found id: ""
	I0127 11:46:52.754710   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.754718   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:52.754723   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:52.754782   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:52.786631   70686 cri.go:89] found id: ""
	I0127 11:46:52.786659   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.786685   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:52.786691   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:52.786745   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:52.817637   70686 cri.go:89] found id: ""
	I0127 11:46:52.817664   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.817672   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:52.817681   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:52.817737   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:52.853402   70686 cri.go:89] found id: ""
	I0127 11:46:52.853428   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.853437   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:52.853443   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:52.853504   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:52.893692   70686 cri.go:89] found id: ""
	I0127 11:46:52.893720   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.893727   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:52.893733   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:52.893780   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.924897   70686 cri.go:89] found id: ""
	I0127 11:46:52.924926   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.924934   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:52.924940   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:52.924988   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:52.955377   70686 cri.go:89] found id: ""
	I0127 11:46:52.955397   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.955404   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:52.955412   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:52.955422   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:53.007489   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:53.007518   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:53.020482   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:53.020508   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:53.088456   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:53.088489   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:53.088503   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:53.161401   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:53.161432   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:55.698676   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:55.711047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:55.711104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:55.741929   70686 cri.go:89] found id: ""
	I0127 11:46:55.741952   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.741960   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:55.741965   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:55.742016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:55.773353   70686 cri.go:89] found id: ""
	I0127 11:46:55.773385   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.773394   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:55.773399   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:55.773453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:55.805262   70686 cri.go:89] found id: ""
	I0127 11:46:55.805293   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.805303   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:55.805309   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:55.805356   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:55.837444   70686 cri.go:89] found id: ""
	I0127 11:46:55.837469   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.837477   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:55.837483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:55.837554   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:55.870483   70686 cri.go:89] found id: ""
	I0127 11:46:55.870519   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.870533   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:55.870541   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:55.870603   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:55.902327   70686 cri.go:89] found id: ""
	I0127 11:46:55.902364   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.902374   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:55.902381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:55.902448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.182237   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:54.680772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:55.079977   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.573914   69688 pod_ready.go:82] duration metric: took 4m0.000313005s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:56.573939   69688 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:56.573958   69688 pod_ready.go:39] duration metric: took 4m9.537234596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:56.573984   69688 kubeadm.go:597] duration metric: took 4m17.786447343s to restartPrimaryControlPlane
	W0127 11:46:56.574055   69688 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:56.574078   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
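
The 4m0.000313005s duration above is a readiness wait hitting its deadline exactly: process 69688 probed metrics-server's Ready condition every few seconds until the 4m0s budget ran out, then gave up and fell back to a full cluster reset. A minimal sketch of such a wait loop, assuming a hypothetical podIsReady check in place of minikube's pod_ready.go logic:

// Sketch of the readiness wait that timed out above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// podIsReady is a placeholder; the real check inspects the pod's
// Ready condition through the Kubernetes API.
func podIsReady(namespace, name string) bool { return false }

func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if podIsReady(namespace, name) {
			return nil
		}
		// The log shows roughly 2.5s between "Ready":"False" probes.
		time.Sleep(2500 * time.Millisecond)
	}
	// Mirrors: timed out waiting 4m0s ... (will not retry!)
	return errors.New("timed out waiting " + timeout.String() + " for pod to be Ready")
}

func main() {
	if err := waitPodReady("kube-system", "metrics-server-f79f97bbb-8rmt5", 4*time.Minute); err != nil {
		fmt.Println("WaitExtra:", err)
	}
}
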
	I0127 11:46:55.936231   70686 cri.go:89] found id: ""
	I0127 11:46:55.936269   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.936279   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:55.936287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:55.936369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:55.968008   70686 cri.go:89] found id: ""
	I0127 11:46:55.968032   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.968039   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:55.968047   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:55.968057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:56.018736   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:56.018766   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:56.031397   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:56.031423   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:56.097044   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:56.097066   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:56.097079   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:56.171821   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:56.171855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:58.715327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:58.728027   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:58.728087   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:58.758672   70686 cri.go:89] found id: ""
	I0127 11:46:58.758700   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.758712   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:58.758719   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:58.758786   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:58.790220   70686 cri.go:89] found id: ""
	I0127 11:46:58.790245   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.790255   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:58.790263   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:58.790327   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:58.822188   70686 cri.go:89] found id: ""
	I0127 11:46:58.822214   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.822221   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:58.822227   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:58.822273   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:58.863053   70686 cri.go:89] found id: ""
	I0127 11:46:58.863089   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.863096   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:58.863102   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:58.863156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:58.899216   70686 cri.go:89] found id: ""
	I0127 11:46:58.899259   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.899271   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:58.899279   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:58.899338   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:58.935392   70686 cri.go:89] found id: ""
	I0127 11:46:58.935425   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.935435   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:58.935441   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:58.935503   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:58.972729   70686 cri.go:89] found id: ""
	I0127 11:46:58.972759   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.972767   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:58.972772   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:58.972823   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:59.008660   70686 cri.go:89] found id: ""
	I0127 11:46:59.008689   70686 logs.go:282] 0 containers: []
	W0127 11:46:59.008698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:59.008707   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:59.008718   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:59.063158   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:59.063199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:59.075767   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:59.075799   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:59.142382   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:59.142406   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:59.142421   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:59.223068   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:59.223100   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
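
Process 70686's outer loop is visible in the timestamps: re-run pgrep for a kube-apiserver process roughly every three seconds, and while none exists, gather the kubelet/dmesg/describe-nodes/CRI-O logs that fill this section. A rough sketch of that retry loop, assuming local execution rather than minikube's SSH runner and a hypothetical apiserverPID helper:

// Sketch of the wait loop driving the repeated pgrep calls above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID returns the newest matching PID, mirroring the logged
// command: sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Printf("kube-apiserver running, pid %s", pid)
			return
		}
		// No apiserver process yet: the real code gathers kubelet, dmesg,
		// describe-nodes and CRI-O logs here, which is the cycle repeated
		// throughout this section.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
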
	I0127 11:46:56.683260   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:59.183917   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:01.760319   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:01.774202   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:01.774282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:01.817355   70686 cri.go:89] found id: ""
	I0127 11:47:01.817389   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.817401   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:01.817408   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:01.817469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:01.862960   70686 cri.go:89] found id: ""
	I0127 11:47:01.862985   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.862996   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:01.863003   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:01.863065   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:01.899900   70686 cri.go:89] found id: ""
	I0127 11:47:01.899931   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.899942   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:01.899949   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:01.900014   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:01.934687   70686 cri.go:89] found id: ""
	I0127 11:47:01.934723   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.934735   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:01.934744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:01.934809   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:01.969463   70686 cri.go:89] found id: ""
	I0127 11:47:01.969490   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.969501   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:01.969507   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:01.969578   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:02.000732   70686 cri.go:89] found id: ""
	I0127 11:47:02.000762   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.000772   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:02.000779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:02.000837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:02.035717   70686 cri.go:89] found id: ""
	I0127 11:47:02.035740   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.035748   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:02.035755   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:02.035799   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:02.073457   70686 cri.go:89] found id: ""
	I0127 11:47:02.073488   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.073498   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:02.073506   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:02.073519   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:02.142775   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:02.142800   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:02.142819   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:02.224541   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:02.224579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:02.260807   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:02.260840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:02.315983   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:02.316017   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:04.830232   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:04.844321   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:04.844380   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:04.880946   70686 cri.go:89] found id: ""
	I0127 11:47:04.880977   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.880986   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:04.880991   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:04.881066   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:04.913741   70686 cri.go:89] found id: ""
	I0127 11:47:04.913766   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.913773   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:04.913778   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:04.913831   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:04.948526   70686 cri.go:89] found id: ""
	I0127 11:47:04.948558   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.948565   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:04.948571   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:04.948621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:04.982076   70686 cri.go:89] found id: ""
	I0127 11:47:04.982102   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.982112   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:04.982119   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:04.982181   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:05.014982   70686 cri.go:89] found id: ""
	I0127 11:47:05.015007   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.015018   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:05.015025   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:05.015111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:05.048025   70686 cri.go:89] found id: ""
	I0127 11:47:05.048054   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.048065   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:05.048073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:05.048132   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:05.078464   70686 cri.go:89] found id: ""
	I0127 11:47:05.078492   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.078502   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:05.078509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:05.078584   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:05.109525   70686 cri.go:89] found id: ""
	I0127 11:47:05.109560   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.109571   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:05.109581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:05.109595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:05.157576   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:05.157608   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:05.170049   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:05.170087   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:05.239411   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:05.239433   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:05.239447   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:05.318700   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:05.318742   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:01.682086   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:04.182095   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:07.856193   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:07.870239   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:07.870310   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:07.910104   70686 cri.go:89] found id: ""
	I0127 11:47:07.910130   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.910138   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:07.910144   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:07.910189   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:07.945048   70686 cri.go:89] found id: ""
	I0127 11:47:07.945074   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.945084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:07.945092   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:07.945166   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:07.976080   70686 cri.go:89] found id: ""
	I0127 11:47:07.976111   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.976122   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:07.976128   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:07.976200   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:08.013354   70686 cri.go:89] found id: ""
	I0127 11:47:08.013388   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.013400   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:08.013407   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:08.013465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:08.045589   70686 cri.go:89] found id: ""
	I0127 11:47:08.045618   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.045626   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:08.045631   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:08.045689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:08.079539   70686 cri.go:89] found id: ""
	I0127 11:47:08.079565   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.079573   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:08.079579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:08.079650   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:08.110343   70686 cri.go:89] found id: ""
	I0127 11:47:08.110375   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.110383   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:08.110388   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:08.110447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:08.140367   70686 cri.go:89] found id: ""
	I0127 11:47:08.140398   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.140411   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:08.140422   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:08.140436   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:08.205212   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:08.205240   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:08.205255   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:08.277925   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:08.277956   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:08.314583   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:08.314609   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:08.362779   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:08.362809   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:10.876637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:10.890367   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:10.890448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:10.925658   70686 cri.go:89] found id: ""
	I0127 11:47:10.925688   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.925699   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:10.925707   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:10.925763   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:06.681477   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:08.681667   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.916547   69396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.656958711s)
	I0127 11:47:13.916611   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:13.933947   69396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:13.945813   69396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:13.956760   69396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:13.956784   69396 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:13.956829   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:13.967874   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:13.967928   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:13.978307   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:13.988624   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:13.988681   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:14.000424   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.012062   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:14.012123   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.021263   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:14.031880   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:14.031940   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
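
The cleanup above keeps a kubeconfig only if it references the expected control-plane endpoint and removes it otherwise; here every grep exits with status 2 because the files were already deleted by the kubeadm reset, so each rm is a no-op. A small sketch of that logic, with local exec standing in for minikube's ssh_runner:

// Sketch of the stale-config cleanup shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing (status 2 in the log above); either way the file goes.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
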
	I0127 11:47:14.043324   69396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:14.085914   69396 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:14.085997   69396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:14.183080   69396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:14.183249   69396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:14.183394   69396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:14.195440   69396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:14.197259   69396 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:14.197356   69396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:14.197854   69396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:14.198266   69396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:14.198428   69396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:14.198787   69396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:14.200947   69396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:14.201202   69396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:14.201438   69396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:14.201742   69396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:14.201820   69396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:14.201962   69396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:14.202056   69396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:14.393335   69396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:14.578877   69396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:14.683103   69396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:14.892112   69396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:15.059210   69396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:15.059802   69396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:15.062493   69396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
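
Every bracketed [certs]/[kubeconfig]/[control-plane] line above is kubeadm's own stdout, re-emitted line by line with the kubeadm.go:310 prefix. A sketch of that plumbing, as a hypothetical stand-in for minikube's ssh_runner/kubeadm.go forwarding:

// Sketch of how kubeadm init output reaches the log: scan the command's
// stdout and re-emit each line with the caller's prefix.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		// Mirrors lines like: kubeadm.go:310] [certs] Using existing ...
		fmt.Println("kubeadm.go:310]", sc.Text())
	}
	cmd.Wait()
}
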
	I0127 11:47:10.957444   70686 cri.go:89] found id: ""
	I0127 11:47:10.957478   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.957490   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:10.957498   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:10.957561   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:10.988373   70686 cri.go:89] found id: ""
	I0127 11:47:10.988401   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.988412   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:10.988419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:10.988483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:11.019641   70686 cri.go:89] found id: ""
	I0127 11:47:11.019672   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.019683   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:11.019690   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:11.019747   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:11.051614   70686 cri.go:89] found id: ""
	I0127 11:47:11.051643   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.051654   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:11.051661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:11.051709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:11.083356   70686 cri.go:89] found id: ""
	I0127 11:47:11.083386   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.083396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:11.083404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:11.083464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:11.115324   70686 cri.go:89] found id: ""
	I0127 11:47:11.115359   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.115370   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:11.115378   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:11.115451   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:11.150953   70686 cri.go:89] found id: ""
	I0127 11:47:11.150983   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.150994   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:11.151005   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:11.151018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:11.199824   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:11.199855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:11.212841   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:11.212906   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:11.278680   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:11.278707   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:11.278726   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:11.356679   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:11.356719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:13.900662   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:13.913787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:13.913849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:13.947893   70686 cri.go:89] found id: ""
	I0127 11:47:13.947922   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.947934   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:13.947943   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:13.948001   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:13.983161   70686 cri.go:89] found id: ""
	I0127 11:47:13.983190   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.983201   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:13.983209   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:13.983264   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:14.022256   70686 cri.go:89] found id: ""
	I0127 11:47:14.022284   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.022295   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:14.022303   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:14.022354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:14.056796   70686 cri.go:89] found id: ""
	I0127 11:47:14.056830   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.056841   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:14.056848   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:14.056907   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:14.094914   70686 cri.go:89] found id: ""
	I0127 11:47:14.094941   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.094948   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:14.094954   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:14.095011   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:14.133436   70686 cri.go:89] found id: ""
	I0127 11:47:14.133463   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.133471   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:14.133477   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:14.133542   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:14.169031   70686 cri.go:89] found id: ""
	I0127 11:47:14.169062   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.169072   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:14.169078   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:14.169125   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:14.212411   70686 cri.go:89] found id: ""
	I0127 11:47:14.212435   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.212443   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:14.212452   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:14.212463   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:14.262867   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:14.262898   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:14.275105   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:14.275131   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:14.341159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:14.341190   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:14.341208   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:14.415317   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:14.415367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:11.180827   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.681189   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.682069   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.064304   69396 out.go:235]   - Booting up control plane ...
	I0127 11:47:15.064419   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:15.064539   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:15.064632   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:15.081619   69396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:15.087804   69396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:15.087864   69396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:15.215883   69396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:15.216024   69396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:15.717623   69396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.507256ms
	I0127 11:47:15.717711   69396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:20.718798   69396 kubeadm.go:310] [api-check] The API server is healthy after 5.001299318s
	I0127 11:47:20.735824   69396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:20.751647   69396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:20.776203   69396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:20.776453   69396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-273200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:20.786999   69396 kubeadm.go:310] [bootstrap-token] Using token: tjwk8y.hsba31n3brg7yicx
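
The two health gates above can be probed directly. The kubelet URL is the one the log shows; the apiserver check below goes through kubectl's raw endpoint and assumes the standard kubeadm admin.conf path (named later in the init output), so treat it as a sketch:

curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
     get --raw='/healthz' && echo "apiserver healthy"
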
	I0127 11:47:16.953543   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:16.966233   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:16.966320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:17.006909   70686 cri.go:89] found id: ""
	I0127 11:47:17.006936   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.006946   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:17.006953   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:17.007008   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:17.041632   70686 cri.go:89] found id: ""
	I0127 11:47:17.041659   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.041669   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:17.041677   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:17.041731   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:17.076772   70686 cri.go:89] found id: ""
	I0127 11:47:17.076801   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.076811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:17.076818   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:17.076870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:17.112391   70686 cri.go:89] found id: ""
	I0127 11:47:17.112422   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.112433   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:17.112440   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:17.112573   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:17.148197   70686 cri.go:89] found id: ""
	I0127 11:47:17.148229   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.148247   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:17.148255   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:17.148320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:17.186840   70686 cri.go:89] found id: ""
	I0127 11:47:17.186871   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.186882   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:17.186895   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:17.186953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:17.219412   70686 cri.go:89] found id: ""
	I0127 11:47:17.219443   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.219454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:17.219463   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:17.219534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:17.256447   70686 cri.go:89] found id: ""
	I0127 11:47:17.256478   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.256488   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:17.256499   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:17.256512   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.293919   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:17.293955   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:17.342997   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:17.343028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:17.356650   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:17.356679   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:17.425809   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:17.425838   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:17.425852   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.017327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:20.034172   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:20.034239   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:20.071873   70686 cri.go:89] found id: ""
	I0127 11:47:20.071895   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.071903   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:20.071908   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:20.071955   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:20.106387   70686 cri.go:89] found id: ""
	I0127 11:47:20.106410   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.106417   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:20.106422   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:20.106481   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:20.141095   70686 cri.go:89] found id: ""
	I0127 11:47:20.141130   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.141138   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:20.141144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:20.141194   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:20.183275   70686 cri.go:89] found id: ""
	I0127 11:47:20.183302   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.183310   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:20.183316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:20.183373   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:20.217954   70686 cri.go:89] found id: ""
	I0127 11:47:20.217981   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.217991   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:20.217999   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:20.218061   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:20.262572   70686 cri.go:89] found id: ""
	I0127 11:47:20.262604   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.262616   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:20.262623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:20.262677   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:20.297951   70686 cri.go:89] found id: ""
	I0127 11:47:20.297982   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.297993   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:20.298000   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:20.298088   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:20.331854   70686 cri.go:89] found id: ""
	I0127 11:47:20.331891   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.331901   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:20.331913   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:20.331930   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:20.387238   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:20.387274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:20.409789   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:20.409823   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:20.487425   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:20.487451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:20.487464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.563923   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:20.563959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.682390   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.182895   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
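
pod_ready is polling the pod's Ready condition; an equivalent one-shot check, or a blocking wait, with plain kubectl (assuming a configured context for that cluster):

kubectl -n kube-system get pod metrics-server-f79f97bbb-swwsl \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" here
kubectl -n kube-system wait --for=condition=Ready \
  pod/metrics-server-f79f97bbb-swwsl --timeout=5m                 # block until it flips
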
	I0127 11:47:20.788426   69396 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:20.788582   69396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:20.793089   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:20.803401   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:20.812287   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:20.816685   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:20.822172   69396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:21.128937   69396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:21.553347   69396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:22.127179   69396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:22.127210   69396 kubeadm.go:310] 
	I0127 11:47:22.127314   69396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:22.127342   69396 kubeadm.go:310] 
	I0127 11:47:22.127419   69396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:22.127428   69396 kubeadm.go:310] 
	I0127 11:47:22.127467   69396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:22.127532   69396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:22.127584   69396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:22.127594   69396 kubeadm.go:310] 
	I0127 11:47:22.127682   69396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:22.127691   69396 kubeadm.go:310] 
	I0127 11:47:22.127757   69396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:22.127768   69396 kubeadm.go:310] 
	I0127 11:47:22.127848   69396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:22.127969   69396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:22.128089   69396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:22.128103   69396 kubeadm.go:310] 
	I0127 11:47:22.128204   69396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:22.128331   69396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:22.128350   69396 kubeadm.go:310] 
	I0127 11:47:22.128485   69396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.128622   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:22.128658   69396 kubeadm.go:310] 	--control-plane 
	I0127 11:47:22.128669   69396 kubeadm.go:310] 
	I0127 11:47:22.128793   69396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:22.128805   69396 kubeadm.go:310] 
	I0127 11:47:22.128921   69396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.129015   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:22.129734   69396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
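
The warning is advisory and names its own remedy; on the node:

sudo systemctl enable kubelet.service   # persist kubelet across reboots
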
	I0127 11:47:22.129770   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:47:22.129781   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:22.131454   69396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:22.132751   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:22.143934   69396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
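
The log records only that a 496-byte 1-k8s.conflist is copied into /etc/cni/net.d; the exact template is not shown. A generic bridge conflist of that shape, for illustration only (the subnet and plugin options below are assumptions, not minikube's actual file):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
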
	I0127 11:47:22.162031   69396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:22.162109   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.162131   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-273200 minikube.k8s.io/updated_at=2025_01_27T11_47_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-273200 minikube.k8s.io/primary=true
	I0127 11:47:22.357159   69396 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:22.357255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.858227   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.101745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:23.115010   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:23.115068   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:23.153195   70686 cri.go:89] found id: ""
	I0127 11:47:23.153223   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.153236   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:23.153244   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:23.153311   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:23.187393   70686 cri.go:89] found id: ""
	I0127 11:47:23.187420   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.187431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:23.187437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:23.187499   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:23.220850   70686 cri.go:89] found id: ""
	I0127 11:47:23.220879   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.220888   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:23.220896   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:23.220953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:23.256597   70686 cri.go:89] found id: ""
	I0127 11:47:23.256626   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.256636   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:23.256644   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:23.256692   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:23.296324   70686 cri.go:89] found id: ""
	I0127 11:47:23.296356   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.296366   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:23.296373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:23.296436   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:23.335645   70686 cri.go:89] found id: ""
	I0127 11:47:23.335672   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.335681   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:23.335687   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:23.335733   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:23.366972   70686 cri.go:89] found id: ""
	I0127 11:47:23.366995   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.367003   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:23.367008   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:23.367062   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:23.405377   70686 cri.go:89] found id: ""
	I0127 11:47:23.405404   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.405412   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:23.405420   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:23.405433   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:23.473871   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:23.473898   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:23.473918   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:23.548827   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:23.548868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:23.584272   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:23.584302   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:23.645470   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:23.645517   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:22.681079   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:24.681767   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:23.357378   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.858261   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.358001   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.858052   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.358029   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.858255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.357827   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.545723   69396 kubeadm.go:1113] duration metric: took 4.38367816s to wait for elevateKubeSystemPrivileges
	I0127 11:47:26.545828   69396 kubeadm.go:394] duration metric: took 5m2.297374967s to StartCluster
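
The repeated `kubectl get sa default` runs above are a readiness poll: elevateKubeSystemPrivileges cannot proceed until the default ServiceAccount exists. Condensed into a shell loop (sketch):

until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # matches the ~500ms cadence visible in the timestamps
done
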
	I0127 11:47:26.545882   69396 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.545994   69396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:26.548122   69396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.548782   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:26.548545   69396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:26.548897   69396 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:26.549176   69396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-273200"
	I0127 11:47:26.549197   69396 addons.go:238] Setting addon storage-provisioner=true in "no-preload-273200"
	W0127 11:47:26.549209   69396 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:47:26.549239   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.549690   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.549730   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.549955   69396 addons.go:69] Setting default-storageclass=true in profile "no-preload-273200"
	I0127 11:47:26.549974   69396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-273200"
	I0127 11:47:26.550340   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.550368   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.550531   69396 addons.go:69] Setting metrics-server=true in profile "no-preload-273200"
	I0127 11:47:26.550551   69396 addons.go:238] Setting addon metrics-server=true in "no-preload-273200"
	W0127 11:47:26.550559   69396 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:26.550590   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550587   69396 addons.go:69] Setting dashboard=true in profile "no-preload-273200"
	I0127 11:47:26.550619   69396 addons.go:238] Setting addon dashboard=true in "no-preload-273200"
	W0127 11:47:26.550629   69396 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:26.550671   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550795   69396 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:26.550980   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551018   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.551086   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551125   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.552072   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:26.591135   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0127 11:47:26.591160   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0127 11:47:26.591337   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0127 11:47:26.591436   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0127 11:47:26.591962   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.591974   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592254   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592532   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592551   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592661   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592682   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592699   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592683   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.593029   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593065   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593226   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.593239   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593679   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593720   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.593787   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593821   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.596147   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.600142   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.600157   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.602457   69396 addons.go:238] Setting addon default-storageclass=true in "no-preload-273200"
	W0127 11:47:26.602479   69396 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:26.602510   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.602874   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.602914   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.604120   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.608202   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.608245   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.617629   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0127 11:47:26.618396   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.618963   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.618984   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.619363   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.619536   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.621603   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.623294   69396 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:26.625658   69396 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:26.626912   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:26.626933   69396 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:26.626955   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.630583   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0127 11:47:26.630587   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 11:47:26.631073   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.631690   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.631710   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.631883   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.632167   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.632324   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.632658   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.632673   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.633439   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.633559   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.633993   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.634505   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.634533   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.634773   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.634922   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.635051   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.635188   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
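
The ssh client parameters logged here are enough to reach the node directly (sketch; the key lives under the test run's .minikube tree):

ssh -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa \
    docker@192.168.61.181
# equivalently: minikube -p no-preload-273200 ssh
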
	I0127 11:47:26.636019   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.636059   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.642473   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 11:47:26.645166   69396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:26.646249   69396 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:26.646264   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:26.646281   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.651734   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.651803   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.651826   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.651843   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.652136   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.659702   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.659915   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.663957   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0127 11:47:26.664289   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665037   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665168   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665183   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665558   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.665749   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665761   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665970   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.666585   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.666886   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.667729   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669615   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669619   69396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:24.171505   69688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.597391159s)
	I0127 11:47:24.171597   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:24.187337   69688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:24.197062   69688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:24.208102   69688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:24.208127   69688 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:24.208176   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:24.223247   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:24.223306   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:24.232903   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:24.241163   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:24.241220   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:24.251669   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.260475   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:24.260534   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.269272   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:24.277509   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:24.277554   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:47:24.286253   69688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:24.435312   69688 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
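
The config check above reduces to: for each kubeconfig kubeadm would reuse, keep it only if it already points at the expected control-plane endpoint. As a loop (sketch):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"   # stale or missing: remove before kubeadm init
done
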
	I0127 11:47:26.669962   69396 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:26.669979   69396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:26.669998   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.670903   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:26.670919   69396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:26.670935   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.675429   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678600   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678659   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678709   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678726   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678749   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678771   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678781   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678803   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.678993   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.679036   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679128   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679182   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.679386   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.875833   69396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:26.920571   69396 node_ready.go:35] waiting up to 6m0s for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939903   69396 node_ready.go:49] node "no-preload-273200" has status "Ready":"True"
	I0127 11:47:26.939926   69396 node_ready.go:38] duration metric: took 19.319573ms for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939937   69396 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:26.959191   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
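
node_ready and pod_ready read the same condition fields a plain kubectl query exposes; for the node check above (sketch, assuming a configured context):

kubectl get node no-preload-273200 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "True" here
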
	I0127 11:47:27.008467   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:27.081273   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:27.081304   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:27.101527   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:27.152011   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:27.152043   69396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:27.244718   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:27.244747   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:27.252472   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.252495   69396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:27.296605   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.313892   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:27.313920   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:27.403990   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:27.404022   69396 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:27.477781   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:27.477811   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:27.571056   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:27.571086   69396 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:27.705284   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:27.705316   69396 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:27.789319   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:27.789349   69396 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:27.870737   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:27.870774   69396 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:27.935415   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:27.935444   69396 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:27.990927   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
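
Once that apply completes, the dashboard objects can be inspected with the same pinned kubectl. The namespace comes from dashboard-ns.yaml and is assumed here to be kubernetes-dashboard, since the manifest body is not in the log:

sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl \
  -n kubernetes-dashboard get deployments,pods
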
	I0127 11:47:28.098209   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089707756s)
	I0127 11:47:28.098259   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098271   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098370   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098402   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098565   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098581   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098609   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098618   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098707   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098721   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098730   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098738   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098839   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.098925   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098945   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.099049   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.099059   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.099062   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.114073   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.114099   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.114382   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.114404   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.614645   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.317992457s)
	I0127 11:47:28.614719   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.614737   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.615709   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.615736   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.615759   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.615779   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.615792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.617426   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.617436   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.617454   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.617473   69396 addons.go:479] Verifying addon metrics-server=true in "no-preload-273200"
	I0127 11:47:28.972192   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.485321   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.494345914s)
	I0127 11:47:29.485395   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485413   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.485754   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.485774   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.485784   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.486141   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:29.486164   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.486172   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.487790   69396 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-273200 addons enable metrics-server
	
	I0127 11:47:29.489175   69396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:26.161139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:26.175269   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:26.175344   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:26.213990   70686 cri.go:89] found id: ""
	I0127 11:47:26.214019   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.214030   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:26.214038   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:26.214099   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:26.250643   70686 cri.go:89] found id: ""
	I0127 11:47:26.250672   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.250680   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:26.250685   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:26.250749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:26.289305   70686 cri.go:89] found id: ""
	I0127 11:47:26.289327   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.289336   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:26.289343   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:26.289400   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:26.327511   70686 cri.go:89] found id: ""
	I0127 11:47:26.327546   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.327557   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:26.327564   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:26.327629   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:26.363961   70686 cri.go:89] found id: ""
	I0127 11:47:26.363996   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.364011   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:26.364019   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:26.364076   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:26.403759   70686 cri.go:89] found id: ""
	I0127 11:47:26.403782   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.403793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:26.403801   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:26.403862   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:26.443391   70686 cri.go:89] found id: ""
	I0127 11:47:26.443419   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.443429   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:26.443436   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:26.443496   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:26.486086   70686 cri.go:89] found id: ""
	I0127 11:47:26.486189   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.486219   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:26.486255   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:26.486290   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:26.537761   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:26.537789   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:26.624695   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:26.624728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:26.644616   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:26.644646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:26.732815   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:26.732835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:26.732846   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:29.315744   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:29.331345   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:29.331421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:29.366233   70686 cri.go:89] found id: ""
	I0127 11:47:29.366264   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.366276   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:29.366283   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:29.366355   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:29.402282   70686 cri.go:89] found id: ""
	I0127 11:47:29.402310   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.402320   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:29.402327   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:29.402389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:29.438381   70686 cri.go:89] found id: ""
	I0127 11:47:29.438409   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.438420   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:29.438429   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:29.438483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:29.473386   70686 cri.go:89] found id: ""
	I0127 11:47:29.473408   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.473414   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:29.473419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:29.473465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:29.506930   70686 cri.go:89] found id: ""
	I0127 11:47:29.506954   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.506961   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:29.506966   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:29.507025   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:29.542763   70686 cri.go:89] found id: ""
	I0127 11:47:29.542786   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.542794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:29.542802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:29.542861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:29.578067   70686 cri.go:89] found id: ""
	I0127 11:47:29.578097   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.578108   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:29.578117   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:29.578176   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:29.613659   70686 cri.go:89] found id: ""
	I0127 11:47:29.613687   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.613698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:29.613709   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:29.613728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:29.659409   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:29.659446   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:29.718837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:29.718870   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:29.735558   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:29.735583   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:29.839999   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:29.840025   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:29.840043   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:26.683550   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.183056   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:32.285356   69688 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:32.285447   69688 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:32.285583   69688 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:32.285722   69688 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:32.285858   69688 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:32.285955   69688 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:32.287165   69688 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:32.287240   69688 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:32.287301   69688 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:32.287411   69688 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:32.287505   69688 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:32.287574   69688 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:32.287659   69688 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:32.287773   69688 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:32.287869   69688 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:32.287947   69688 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:32.288020   69688 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:32.288054   69688 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:32.288102   69688 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:32.288149   69688 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:32.288202   69688 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:32.288265   69688 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:32.288341   69688 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:32.288412   69688 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:32.288506   69688 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:32.288612   69688 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:32.290658   69688 out.go:235]   - Booting up control plane ...
	I0127 11:47:32.290754   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:32.290861   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:32.290938   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:32.291060   69688 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:32.291188   69688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:32.291240   69688 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:32.291426   69688 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:32.291585   69688 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:32.291703   69688 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.921879ms
	I0127 11:47:32.291805   69688 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:32.291896   69688 kubeadm.go:310] [api-check] The API server is healthy after 5.007975802s
	I0127 11:47:32.292039   69688 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:32.292235   69688 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:32.292322   69688 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:32.292582   69688 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-986409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:32.292672   69688 kubeadm.go:310] [bootstrap-token] Using token: qkdn31.mmb2k0rafw3oyd5r
	I0127 11:47:32.293870   69688 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:32.294001   69688 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:32.294069   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:32.294179   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:32.294287   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:32.294412   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:32.294512   69688 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:32.294620   69688 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:32.294658   69688 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:32.294697   69688 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:32.294704   69688 kubeadm.go:310] 
	I0127 11:47:32.294752   69688 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:32.294759   69688 kubeadm.go:310] 
	I0127 11:47:32.294824   69688 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:32.294834   69688 kubeadm.go:310] 
	I0127 11:47:32.294869   69688 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:32.294927   69688 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:32.294970   69688 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:32.294976   69688 kubeadm.go:310] 
	I0127 11:47:32.295034   69688 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:32.295040   69688 kubeadm.go:310] 
	I0127 11:47:32.295078   69688 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:32.295084   69688 kubeadm.go:310] 
	I0127 11:47:32.295129   69688 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:32.295218   69688 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:32.295321   69688 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:32.295333   69688 kubeadm.go:310] 
	I0127 11:47:32.295447   69688 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:32.295574   69688 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:32.295586   69688 kubeadm.go:310] 
	I0127 11:47:32.295723   69688 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.295861   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:32.295885   69688 kubeadm.go:310] 	--control-plane 
	I0127 11:47:32.295888   69688 kubeadm.go:310] 
	I0127 11:47:32.295957   69688 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:32.295963   69688 kubeadm.go:310] 
	I0127 11:47:32.296089   69688 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.296217   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:32.296242   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:47:32.296252   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:32.297821   69688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:32.299024   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:32.311774   69688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:32.333154   69688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:32.333250   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:32.333317   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-986409 minikube.k8s.io/updated_at=2025_01_27T11_47_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=embed-certs-986409 minikube.k8s.io/primary=true
	I0127 11:47:32.373901   69688 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:32.614706   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:29.490582   69396 addons.go:514] duration metric: took 2.941688444s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:31.467084   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.115242   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:33.614855   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.114947   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.615735   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.114787   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.615277   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.708075   69688 kubeadm.go:1113] duration metric: took 3.374895681s to wait for elevateKubeSystemPrivileges
	I0127 11:47:35.708110   69688 kubeadm.go:394] duration metric: took 4m56.964886498s to StartCluster
	I0127 11:47:35.708127   69688 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.708206   69688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:35.709765   69688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.710017   69688 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:35.710099   69688 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:35.710197   69688 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-986409"
	I0127 11:47:35.710208   69688 addons.go:69] Setting default-storageclass=true in profile "embed-certs-986409"
	I0127 11:47:35.710224   69688 addons.go:69] Setting dashboard=true in profile "embed-certs-986409"
	I0127 11:47:35.710231   69688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-986409"
	I0127 11:47:35.710234   69688 addons.go:238] Setting addon dashboard=true in "embed-certs-986409"
	I0127 11:47:35.710215   69688 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-986409"
	W0127 11:47:35.710294   69688 addons.go:247] addon storage-provisioner should already be in state true
	W0127 11:47:35.710246   69688 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:35.710361   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.710231   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:35.710232   69688 addons.go:69] Setting metrics-server=true in profile "embed-certs-986409"
	I0127 11:47:35.710835   69688 addons.go:238] Setting addon metrics-server=true in "embed-certs-986409"
	W0127 11:47:35.710848   69688 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:35.710878   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.711284   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711319   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711356   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711379   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711948   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.712418   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.712548   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.713403   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.713472   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.719688   69688 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:35.721496   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:35.730986   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0127 11:47:35.731485   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.731589   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0127 11:47:35.731973   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.731990   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732030   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732378   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.732610   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I0127 11:47:35.732868   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.732886   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732943   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732985   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733025   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733170   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.733387   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.733408   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.733574   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733609   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733744   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.734292   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.734315   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.739242   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0127 11:47:35.739695   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.740240   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.740254   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.740603   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.740797   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.744403   69688 addons.go:238] Setting addon default-storageclass=true in "embed-certs-986409"
	W0127 11:47:35.744426   69688 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:35.744451   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.744823   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.744854   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.756768   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0127 11:47:35.757189   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.757717   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.757742   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.758231   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.758430   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.760526   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.762154   69688 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:35.763484   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:35.763499   69688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:35.763517   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.766471   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.766836   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.766859   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.767027   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.767162   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.767269   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.767362   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.768736   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0127 11:47:35.769217   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.769830   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.769845   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.770259   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.770842   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.770876   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.773590   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0127 11:47:35.774146   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.774722   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.774738   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.774800   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0127 11:47:35.775433   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.775595   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.775820   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.776093   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.776103   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.776797   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.777045   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.777670   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.778791   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.779433   69688 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:35.780791   69688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:35.782335   69688 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:32.447780   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:32.465728   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:32.465812   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:32.527859   70686 cri.go:89] found id: ""
	I0127 11:47:32.527947   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.527972   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:32.527990   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:32.528104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:32.576073   70686 cri.go:89] found id: ""
	I0127 11:47:32.576171   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.576187   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:32.576195   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:32.576290   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:32.623076   70686 cri.go:89] found id: ""
	I0127 11:47:32.623118   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.623130   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:32.623137   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:32.623225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:32.691228   70686 cri.go:89] found id: ""
	I0127 11:47:32.691318   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.691343   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:32.691362   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:32.691477   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:32.745780   70686 cri.go:89] found id: ""
	I0127 11:47:32.745811   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.745823   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:32.745831   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:32.745906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:32.789692   70686 cri.go:89] found id: ""
	I0127 11:47:32.789731   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.789741   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:32.789751   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:32.789817   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:32.826257   70686 cri.go:89] found id: ""
	I0127 11:47:32.826288   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.826299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:32.826306   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:32.826368   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:32.868284   70686 cri.go:89] found id: ""
	I0127 11:47:32.868309   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.868320   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:32.868332   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:32.868354   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:32.925073   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:32.925103   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:32.941771   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:32.941804   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:33.030670   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:33.030695   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:33.030706   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:33.113430   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:33.113464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:35.663439   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:35.680531   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:35.680611   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:35.722549   70686 cri.go:89] found id: ""
	I0127 11:47:35.722571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.722581   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:35.722589   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:35.722634   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:35.788057   70686 cri.go:89] found id: ""
	I0127 11:47:35.788078   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.788084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:35.788090   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:35.788127   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:35.833279   70686 cri.go:89] found id: ""
	I0127 11:47:35.833300   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.833308   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:35.833314   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:35.833357   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:35.874544   70686 cri.go:89] found id: ""
	I0127 11:47:35.874571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.874582   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:35.874589   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:35.874654   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:35.915199   70686 cri.go:89] found id: ""
	I0127 11:47:35.915230   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.915242   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:35.915249   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:35.915314   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:31.183154   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.184826   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.682393   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.782468   69688 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:35.782484   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:35.782515   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.783769   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:35.783786   69688 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:35.783877   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.786270   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786826   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.786854   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786891   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787046   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787077   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787232   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.787378   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.787671   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.787689   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787707   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787860   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787992   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.788077   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.793305   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0127 11:47:35.793811   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.794453   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.794473   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.794772   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.795062   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.796950   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.797253   69688 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:35.797272   69688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:35.797291   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.800329   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800750   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.800775   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800948   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.801144   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.801274   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.801417   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.954346   69688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:35.990894   69688 node_ready.go:35] waiting up to 6m0s for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021695   69688 node_ready.go:49] node "embed-certs-986409" has status "Ready":"True"
	I0127 11:47:36.021724   69688 node_ready.go:38] duration metric: took 30.797887ms for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021737   69688 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:36.029373   69688 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.075684   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:36.075765   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:36.118613   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:36.128091   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:36.128117   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:36.143161   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:36.143196   69688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:36.167151   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:36.195969   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:36.196003   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:36.215973   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.216001   69688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:36.279892   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:36.279930   69688 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:36.302557   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.356672   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:36.356705   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:36.403728   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:36.403755   69688 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:36.490122   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:36.490161   69688 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:36.572014   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:36.572085   69688 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:36.666239   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:36.666266   69688 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:36.784627   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:36.784652   69688 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:36.874981   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:37.244603   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077408875s)
	I0127 11:47:37.244729   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244748   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.244744   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.126101345s)
	I0127 11:47:37.244768   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244778   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246690   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246704   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246699   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246729   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246739   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246747   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246781   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246794   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246804   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246812   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.247222   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247287   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247352   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.247364   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.248606   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.248624   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281282   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.281317   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.281631   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.281653   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281654   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:33.966528   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.970381   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:36.467240   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.467270   69396 pod_ready.go:82] duration metric: took 9.508045614s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.467284   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474274   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.474309   69396 pod_ready.go:82] duration metric: took 7.015963ms for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474322   69396 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480897   69396 pod_ready.go:93] pod "etcd-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.480926   69396 pod_ready.go:82] duration metric: took 6.596204ms for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480938   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487288   69396 pod_ready.go:93] pod "kube-apiserver-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.487320   69396 pod_ready.go:82] duration metric: took 6.372473ms for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487332   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497692   69396 pod_ready.go:93] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.497721   69396 pod_ready.go:82] duration metric: took 10.381356ms for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497733   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864696   69396 pod_ready.go:93] pod "kube-proxy-mct6v" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.864728   69396 pod_ready.go:82] duration metric: took 366.98634ms for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864742   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265304   69396 pod_ready.go:93] pod "kube-scheduler-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:37.265326   69396 pod_ready.go:82] duration metric: took 400.576908ms for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265334   69396 pod_ready.go:39] duration metric: took 10.325386118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
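
The pod_ready.go lines above record the per-pod readiness poll: each system pod is re-checked until its "Ready" condition reports True, and a duration metric is logged for the wait. As a rough sketch only (not minikube's actual pod_ready.go), a comparable poll using standard client-go/apimachinery calls could look like the following; the kubeconfig path, namespace, pod name, and 6m0s budget are taken from the log, everything else is assumed:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path as in the log
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		start := time.Now()
		err = waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-273200", 6*time.Minute)
		fmt.Printf("duration metric: took %s (err=%v)\n", time.Since(start), err)
	}
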
	I0127 11:47:37.265347   69396 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:37.265391   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:37.284810   69396 api_server.go:72] duration metric: took 10.735955735s to wait for apiserver process to appear ...
	I0127 11:47:37.284832   69396 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:37.284859   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:47:37.292026   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0127 11:47:37.293646   69396 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:37.293675   69396 api_server.go:131] duration metric: took 8.835297ms to wait for apiserver health ...
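
Immediately after the process check, api_server.go probes https://192.168.61.181:8443/healthz and expects HTTP 200 with the body "ok" before reading the control-plane version. A minimal sketch of such a probe with plain net/http (illustrative; skipping certificate verification is an assumption for a self-signed test cluster and is not necessarily how minikube itself authenticates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption: self-signed apiserver certificate, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body) // mirrors the log line above
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.61.181:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
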
	I0127 11:47:37.293685   69396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:37.469184   69396 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:37.469220   69396 system_pods.go:61] "coredns-668d6bf9bc-nqskc" [a9b24f06-5dc0-4a9e-a8f4-c6f311389c62] Running
	I0127 11:47:37.469228   69396 system_pods.go:61] "coredns-668d6bf9bc-qh6rg" [05780b99-a232-4846-a4b6-111f8d3d386e] Running
	I0127 11:47:37.469234   69396 system_pods.go:61] "etcd-no-preload-273200" [d1362a7f-ee18-4157-b8df-b9a3a9372f0a] Running
	I0127 11:47:37.469240   69396 system_pods.go:61] "kube-apiserver-no-preload-273200" [32c9d6be-2aac-475a-b7ba-0414122f7c6b] Running
	I0127 11:47:37.469247   69396 system_pods.go:61] "kube-controller-manager-no-preload-273200" [1091690b-7b66-4f8d-aa90-567ff97c5c19] Running
	I0127 11:47:37.469252   69396 system_pods.go:61] "kube-proxy-mct6v" [7cd1c7e8-827a-491e-8093-a7a3afc26544] Running
	I0127 11:47:37.469257   69396 system_pods.go:61] "kube-scheduler-no-preload-273200" [fde979de-7c70-4ef8-8d23-6ed01a30bf76] Running
	I0127 11:47:37.469265   69396 system_pods.go:61] "metrics-server-f79f97bbb-z6fn6" [8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:37.469270   69396 system_pods.go:61] "storage-provisioner" [42d86701-11bb-4b1c-a522-ec9e7912d024] Running
	I0127 11:47:37.469280   69396 system_pods.go:74] duration metric: took 175.587004ms to wait for pod list to return data ...
	I0127 11:47:37.469292   69396 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:37.664628   69396 default_sa.go:45] found service account: "default"
	I0127 11:47:37.664664   69396 default_sa.go:55] duration metric: took 195.36433ms for default service account to be created ...
	I0127 11:47:37.664679   69396 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:37.868541   69396 system_pods.go:87] 9 kube-system pods found
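
The system_pods.go lines then list the kube-system namespace and print each pod with its UID and status, flagging pods such as metrics-server that are still Pending with unready containers. A compact client-go sketch of that listing (illustrative only; clientset construction as in the readiness sketch above):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			running := p.Status.Phase == corev1.PodRunning // Pending pods still have unready containers
			fmt.Printf("%q [%s] running=%v\n", p.Name, p.UID, running)
		}
	}
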
	I0127 11:47:37.980174   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.677566724s)
	I0127 11:47:37.980228   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980244   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980560   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980582   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980592   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980601   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980880   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.980939   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980966   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980987   69688 addons.go:479] Verifying addon metrics-server=true in "embed-certs-986409"
	I0127 11:47:38.056288   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:38.999682   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.124629898s)
	I0127 11:47:38.999752   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:38.999775   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000135   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000179   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.000185   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000205   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:39.000220   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000492   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000493   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000507   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.002275   69688 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-986409 addons enable metrics-server
	
	I0127 11:47:39.003930   69688 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:35.952137   70686 cri.go:89] found id: ""
	I0127 11:47:35.952165   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.952175   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:35.952183   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:35.952247   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:35.995842   70686 cri.go:89] found id: ""
	I0127 11:47:35.995870   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.995882   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:35.995889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:35.995946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:36.045603   70686 cri.go:89] found id: ""
	I0127 11:47:36.045629   70686 logs.go:282] 0 containers: []
	W0127 11:47:36.045639   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:36.045647   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:36.045661   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:36.122919   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:36.122952   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:36.141794   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:36.141827   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:36.246196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:36.246229   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:36.246253   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:36.363333   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:36.363378   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
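
Each cycle the 70686 process runs above is the same diagnostic loop: ask CRI-O (via crictl) for every expected control-plane container by name and, finding none, fall back to gathering kubelet, dmesg, "describe nodes", CRI-O, and container-status logs. A minimal sketch of the crictl half of that loop, run locally via os/exec (minikube issues the identical command over SSH through ssh_runner; crictl being on PATH is an assumption here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers returns the IDs of all containers whose name matches, in any state.
	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			ids, err := findContainers(name)
			if err != nil {
				fmt.Println("crictl failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name) // as in the W-level lines above
			}
		}
	}
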
	I0127 11:47:38.920333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:38.937466   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:38.937549   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:38.982630   70686 cri.go:89] found id: ""
	I0127 11:47:38.982660   70686 logs.go:282] 0 containers: []
	W0127 11:47:38.982672   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:38.982680   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:38.982741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:39.027004   70686 cri.go:89] found id: ""
	I0127 11:47:39.027034   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.027045   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:39.027052   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:39.027114   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:39.068819   70686 cri.go:89] found id: ""
	I0127 11:47:39.068841   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.068849   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:39.068854   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:39.068900   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:39.105724   70686 cri.go:89] found id: ""
	I0127 11:47:39.105758   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.105770   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:39.105779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:39.105849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:39.156156   70686 cri.go:89] found id: ""
	I0127 11:47:39.156183   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.156193   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:39.156200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:39.156257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:39.193966   70686 cri.go:89] found id: ""
	I0127 11:47:39.194002   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.194012   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:39.194021   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:39.194085   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:39.231373   70686 cri.go:89] found id: ""
	I0127 11:47:39.231398   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.231407   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:39.231415   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:39.231479   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:39.278257   70686 cri.go:89] found id: ""
	I0127 11:47:39.278288   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.278299   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:39.278309   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:39.278324   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:39.356076   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:39.356128   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:39.371224   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:39.371259   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:39.446307   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:39.446334   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:39.446350   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:39.543997   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:39.544032   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:38.182709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:40.681322   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:39.005168   69688 addons.go:514] duration metric: took 3.295073777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:40.536239   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:41.539907   69688 pod_ready.go:93] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:41.539938   69688 pod_ready.go:82] duration metric: took 5.510539517s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:41.539950   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046422   69688 pod_ready.go:93] pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.046450   69688 pod_ready.go:82] duration metric: took 506.490576ms for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046464   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.056999   69688 pod_ready.go:93] pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.057022   69688 pod_ready.go:82] duration metric: took 10.550413ms for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.057033   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066831   69688 pod_ready.go:93] pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.066859   69688 pod_ready.go:82] duration metric: took 9.817042ms for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066869   69688 pod_ready.go:39] duration metric: took 6.045119057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:42.066885   69688 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:42.066943   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.106914   69688 api_server.go:72] duration metric: took 6.396863225s to wait for apiserver process to appear ...
	I0127 11:47:42.106942   69688 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:42.106967   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:47:42.115128   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0127 11:47:42.116724   69688 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:42.116746   69688 api_server.go:131] duration metric: took 9.796211ms to wait for apiserver health ...
	I0127 11:47:42.116753   69688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:42.123449   69688 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:42.123472   69688 system_pods.go:61] "coredns-668d6bf9bc-9sk5f" [c6114990-b336-472e-8720-1ef5ccd3b001] Running
	I0127 11:47:42.123479   69688 system_pods.go:61] "coredns-668d6bf9bc-jvx66" [7eab12a3-7303-43fc-84fa-034ced59689b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:47:42.123486   69688 system_pods.go:61] "etcd-embed-certs-986409" [ebdc15ff-c173-440b-ae1a-c0bc983c015b] Running
	I0127 11:47:42.123491   69688 system_pods.go:61] "kube-apiserver-embed-certs-986409" [3cbf2980-e1b2-4cff-8d01-ab9ec4806976] Running
	I0127 11:47:42.123496   69688 system_pods.go:61] "kube-controller-manager-embed-certs-986409" [642b9798-c605-4987-9d0d-2481f451d943] Running
	I0127 11:47:42.123503   69688 system_pods.go:61] "kube-proxy-b82rc" [08412bee-7381-4d81-bb67-fb39fefc29bb] Running
	I0127 11:47:42.123508   69688 system_pods.go:61] "kube-scheduler-embed-certs-986409" [7774826a-ca31-4662-94db-76f6ccbf07c3] Running
	I0127 11:47:42.123516   69688 system_pods.go:61] "metrics-server-f79f97bbb-pjkmz" [4828c28f-5ef4-48ea-9360-151007c2d9be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:42.123522   69688 system_pods.go:61] "storage-provisioner" [df18a80b-cc75-49f1-bd1a-48bab4776d25] Running
	I0127 11:47:42.123530   69688 system_pods.go:74] duration metric: took 6.771018ms to wait for pod list to return data ...
	I0127 11:47:42.123541   69688 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:42.127202   69688 default_sa.go:45] found service account: "default"
	I0127 11:47:42.127219   69688 default_sa.go:55] duration metric: took 3.6724ms for default service account to be created ...
	I0127 11:47:42.127227   69688 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:42.139808   69688 system_pods.go:87] 9 kube-system pods found
	I0127 11:47:42.081513   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.095014   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:42.095074   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:42.130635   70686 cri.go:89] found id: ""
	I0127 11:47:42.130660   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.130670   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:42.130677   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:42.130741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:42.169363   70686 cri.go:89] found id: ""
	I0127 11:47:42.169394   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.169405   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:42.169415   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:42.169475   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:42.213803   70686 cri.go:89] found id: ""
	I0127 11:47:42.213831   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.213839   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:42.213849   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:42.213911   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:42.249475   70686 cri.go:89] found id: ""
	I0127 11:47:42.249505   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.249516   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:42.249524   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:42.249719   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:42.297727   70686 cri.go:89] found id: ""
	I0127 11:47:42.297753   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.297765   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:42.297770   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:42.297822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:42.340478   70686 cri.go:89] found id: ""
	I0127 11:47:42.340503   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.340513   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:42.340520   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:42.340580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:42.372922   70686 cri.go:89] found id: ""
	I0127 11:47:42.372952   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.372963   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:42.372971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:42.373029   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:42.407938   70686 cri.go:89] found id: ""
	I0127 11:47:42.407967   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.407978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:42.407989   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:42.408005   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:42.484491   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:42.484530   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:42.484553   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:42.579113   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:42.579152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:42.624076   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:42.624105   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:42.679902   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:42.679934   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:45.194468   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:45.207509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:45.207572   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:45.239999   70686 cri.go:89] found id: ""
	I0127 11:47:45.240028   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.240039   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:45.240046   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:45.240098   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:45.273395   70686 cri.go:89] found id: ""
	I0127 11:47:45.273422   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.273431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:45.273437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:45.273495   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:45.311168   70686 cri.go:89] found id: ""
	I0127 11:47:45.311202   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.311212   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:45.311220   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:45.311284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:45.349465   70686 cri.go:89] found id: ""
	I0127 11:47:45.349491   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.349508   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:45.349513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:45.349568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:45.385823   70686 cri.go:89] found id: ""
	I0127 11:47:45.385848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.385856   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:45.385862   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:45.385919   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:45.426563   70686 cri.go:89] found id: ""
	I0127 11:47:45.426591   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.426603   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:45.426610   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:45.426669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:45.467818   70686 cri.go:89] found id: ""
	I0127 11:47:45.467848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.467856   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:45.467861   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:45.467913   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:45.505509   70686 cri.go:89] found id: ""
	I0127 11:47:45.505551   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.505570   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:45.505581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:45.505595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:45.562102   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:45.562134   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:45.576502   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:45.576547   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:45.656107   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:45.656179   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:45.656200   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:45.740259   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:45.740307   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:43.182256   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:45.682893   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:48.288077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:48.305506   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:48.305575   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:48.341384   70686 cri.go:89] found id: ""
	I0127 11:47:48.341413   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.341424   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:48.341431   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:48.341490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:48.385225   70686 cri.go:89] found id: ""
	I0127 11:47:48.385256   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.385266   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:48.385273   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:48.385331   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:48.432004   70686 cri.go:89] found id: ""
	I0127 11:47:48.432026   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.432034   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:48.432039   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:48.432096   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:48.467009   70686 cri.go:89] found id: ""
	I0127 11:47:48.467037   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.467047   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:48.467054   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:48.467111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:48.503820   70686 cri.go:89] found id: ""
	I0127 11:47:48.503847   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.503858   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:48.503864   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:48.503909   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:48.538884   70686 cri.go:89] found id: ""
	I0127 11:47:48.538908   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.538915   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:48.538924   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:48.538983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:48.572744   70686 cri.go:89] found id: ""
	I0127 11:47:48.572773   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.572783   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:48.572791   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:48.572853   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:48.610043   70686 cri.go:89] found id: ""
	I0127 11:47:48.610076   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.610086   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:48.610108   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:48.610123   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:48.683427   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:48.683468   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:48.698950   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:48.698984   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:48.771789   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:48.771819   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:48.771833   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:48.852605   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:48.852642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:48.185457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:50.682230   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:51.390888   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:51.403787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:51.403867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:51.438712   70686 cri.go:89] found id: ""
	I0127 11:47:51.438739   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.438746   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:51.438752   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:51.438808   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:51.476783   70686 cri.go:89] found id: ""
	I0127 11:47:51.476811   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.476821   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:51.476829   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:51.476887   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:51.509461   70686 cri.go:89] found id: ""
	I0127 11:47:51.509505   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.509522   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:51.509534   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:51.509592   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:51.545890   70686 cri.go:89] found id: ""
	I0127 11:47:51.545918   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.545936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:51.545943   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:51.546004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:51.582831   70686 cri.go:89] found id: ""
	I0127 11:47:51.582859   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.582868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:51.582876   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:51.582935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:51.618841   70686 cri.go:89] found id: ""
	I0127 11:47:51.618866   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.618874   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:51.618880   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:51.618934   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:51.654004   70686 cri.go:89] found id: ""
	I0127 11:47:51.654037   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.654048   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:51.654055   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:51.654119   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:51.693492   70686 cri.go:89] found id: ""
	I0127 11:47:51.693525   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.693535   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:51.693547   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:51.693561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:51.742871   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:51.742901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:51.756625   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:51.756648   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:51.818231   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:51.818258   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:51.818274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:51.897522   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:51.897556   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.435357   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:54.447575   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:54.447662   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:54.481516   70686 cri.go:89] found id: ""
	I0127 11:47:54.481546   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.481557   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:54.481565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:54.481628   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:54.513468   70686 cri.go:89] found id: ""
	I0127 11:47:54.513494   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.513503   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:54.513510   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:54.513564   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:54.546743   70686 cri.go:89] found id: ""
	I0127 11:47:54.546768   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.546776   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:54.546781   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:54.546837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:54.577457   70686 cri.go:89] found id: ""
	I0127 11:47:54.577495   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.577525   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:54.577533   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:54.577604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:54.607337   70686 cri.go:89] found id: ""
	I0127 11:47:54.607366   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.607375   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:54.607381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:54.607427   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:54.651259   70686 cri.go:89] found id: ""
	I0127 11:47:54.651290   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.651301   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:54.651308   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:54.651369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:54.688579   70686 cri.go:89] found id: ""
	I0127 11:47:54.688604   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.688613   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:54.688619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:54.688678   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:54.725278   70686 cri.go:89] found id: ""
	I0127 11:47:54.725322   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.725341   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:54.725353   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:54.725367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:54.791430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:54.791452   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:54.791465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:54.868163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:54.868191   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.905354   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:54.905385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:54.957412   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:54.957444   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:53.181163   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:55.181247   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:57.471717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:57.484472   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:57.484545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:57.515302   70686 cri.go:89] found id: ""
	I0127 11:47:57.515334   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.515345   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:57.515353   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:57.515412   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:57.548214   70686 cri.go:89] found id: ""
	I0127 11:47:57.548239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.548248   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:57.548255   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:57.548316   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:57.581598   70686 cri.go:89] found id: ""
	I0127 11:47:57.581624   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.581632   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:57.581637   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:57.581682   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:57.617610   70686 cri.go:89] found id: ""
	I0127 11:47:57.617642   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.617654   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:57.617661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:57.617726   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:57.650213   70686 cri.go:89] found id: ""
	I0127 11:47:57.650239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.650246   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:57.650252   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:57.650319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:57.688111   70686 cri.go:89] found id: ""
	I0127 11:47:57.688132   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.688142   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:57.688150   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:57.688197   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:57.720752   70686 cri.go:89] found id: ""
	I0127 11:47:57.720782   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.720792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:57.720798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:57.720845   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:57.751896   70686 cri.go:89] found id: ""
	I0127 11:47:57.751925   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.751936   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:57.751946   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:57.751959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:57.802765   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:57.802797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:57.815299   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:57.815323   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:57.878584   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:57.878612   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:57.878627   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.954926   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:57.954961   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:00.492831   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:00.505398   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:00.505458   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:00.541546   70686 cri.go:89] found id: ""
	I0127 11:48:00.541572   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.541583   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:00.541590   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:00.541658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:00.574543   70686 cri.go:89] found id: ""
	I0127 11:48:00.574575   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.574585   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:00.574596   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:00.574658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:00.607826   70686 cri.go:89] found id: ""
	I0127 11:48:00.607855   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.607865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:00.607872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:00.607931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:00.642893   70686 cri.go:89] found id: ""
	I0127 11:48:00.642925   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.642936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:00.642944   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:00.642997   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:00.675525   70686 cri.go:89] found id: ""
	I0127 11:48:00.675549   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.675557   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:00.675563   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:00.675642   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:00.708878   70686 cri.go:89] found id: ""
	I0127 11:48:00.708913   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.708921   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:00.708926   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:00.708971   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:00.740471   70686 cri.go:89] found id: ""
	I0127 11:48:00.740505   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.740512   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:00.740518   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:00.740568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:00.776050   70686 cri.go:89] found id: ""
	I0127 11:48:00.776078   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.776088   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:00.776099   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:00.776112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:00.789429   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:00.789465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:00.855134   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:00.855159   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:00.855176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.684463   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:59.175404   70237 pod_ready.go:82] duration metric: took 4m0.000243677s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" ...
	E0127 11:47:59.175451   70237 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:47:59.175501   70237 pod_ready.go:39] duration metric: took 4m10.536256424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:59.175547   70237 kubeadm.go:597] duration metric: took 4m18.512037331s to restartPrimaryControlPlane
	W0127 11:47:59.175647   70237 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:47:59.175705   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
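The 70237 lines interleaved here belong to a second profile running in parallel in the same test binary. Its wait loop gave the metrics-server pod the full 4m0s to reach Ready, then gave up on restarting the existing control plane and fell back to `kubeadm reset --force`, which removes the kubeconfig files under /etc/kubernetes (the missing admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf further down are a direct consequence). Below is a minimal sketch of that deadline-bounded wait pattern, with hypothetical names and no claim to match minikube's pod_ready.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it reports ready or the timeout elapses,
// the same shape as the 4m0s pod-Ready wait that expires in the log.
func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Demo with a short deadline; the run above used 4m0s.
	err := waitFor(2*time.Second, 500*time.Millisecond, func() (bool, error) {
		return false, nil // never becomes ready, like metrics-server above
	})
	fmt.Println(err)
}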
	I0127 11:48:00.932863   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:00.932910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:00.969770   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:00.969797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.521596   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:03.536040   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:03.536171   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:03.571013   70686 cri.go:89] found id: ""
	I0127 11:48:03.571046   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.571057   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:03.571065   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:03.571128   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:03.605846   70686 cri.go:89] found id: ""
	I0127 11:48:03.605871   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.605879   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:03.605885   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:03.605931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:03.641481   70686 cri.go:89] found id: ""
	I0127 11:48:03.641515   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.641524   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:03.641529   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:03.641595   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:03.676290   70686 cri.go:89] found id: ""
	I0127 11:48:03.676316   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.676326   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:03.676333   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:03.676395   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:03.713213   70686 cri.go:89] found id: ""
	I0127 11:48:03.713235   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.713243   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:03.713248   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:03.713337   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:03.746114   70686 cri.go:89] found id: ""
	I0127 11:48:03.746141   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.746151   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:03.746158   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:03.746217   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:03.780250   70686 cri.go:89] found id: ""
	I0127 11:48:03.780289   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.780299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:03.780307   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:03.780354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:03.817856   70686 cri.go:89] found id: ""
	I0127 11:48:03.817885   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.817896   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:03.817907   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:03.817921   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:03.898728   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:03.898779   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:03.935189   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:03.935222   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.990903   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:03.990946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:04.004559   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:04.004584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:04.078588   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:06.578765   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:06.592073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:06.592134   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:06.624430   70686 cri.go:89] found id: ""
	I0127 11:48:06.624465   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.624476   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:06.624484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:06.624555   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:06.677207   70686 cri.go:89] found id: ""
	I0127 11:48:06.677244   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.677257   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:06.677264   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:06.677346   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:06.718809   70686 cri.go:89] found id: ""
	I0127 11:48:06.718833   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.718840   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:06.718845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:06.718890   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:06.754041   70686 cri.go:89] found id: ""
	I0127 11:48:06.754076   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.754089   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:06.754100   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:06.754160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:06.785748   70686 cri.go:89] found id: ""
	I0127 11:48:06.785776   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.785788   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:06.785795   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:06.785854   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:06.819849   70686 cri.go:89] found id: ""
	I0127 11:48:06.819872   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.819879   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:06.819884   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:06.819930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:06.853347   70686 cri.go:89] found id: ""
	I0127 11:48:06.853372   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.853381   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:06.853387   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:06.853438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:06.885714   70686 cri.go:89] found id: ""
	I0127 11:48:06.885740   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.885747   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:06.885755   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:06.885765   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:06.921805   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:06.921832   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:06.974607   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:06.974638   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:06.987566   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:06.987625   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:07.056872   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:07.056892   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:07.056905   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:09.644164   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:09.657446   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:09.657519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:09.696908   70686 cri.go:89] found id: ""
	I0127 11:48:09.696940   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.696950   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:09.696957   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:09.697016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:09.729636   70686 cri.go:89] found id: ""
	I0127 11:48:09.729665   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.729675   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:09.729682   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:09.729742   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:09.769699   70686 cri.go:89] found id: ""
	I0127 11:48:09.769726   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.769734   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:09.769740   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:09.769791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:09.801315   70686 cri.go:89] found id: ""
	I0127 11:48:09.801360   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.801368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:09.801374   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:09.801432   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:09.831170   70686 cri.go:89] found id: ""
	I0127 11:48:09.831212   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.831221   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:09.831226   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:09.831294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:09.862163   70686 cri.go:89] found id: ""
	I0127 11:48:09.862188   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.862194   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:09.862200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:09.862262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:09.893097   70686 cri.go:89] found id: ""
	I0127 11:48:09.893125   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.893136   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:09.893144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:09.893201   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:09.924215   70686 cri.go:89] found id: ""
	I0127 11:48:09.924247   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.924259   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:09.924269   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:09.924286   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:09.990827   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:09.990849   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:09.990859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:10.063335   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:10.063366   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:10.099158   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:10.099199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:10.150789   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:10.150821   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:12.664524   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:12.677711   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:12.677791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:12.710353   70686 cri.go:89] found id: ""
	I0127 11:48:12.710377   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.710384   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:12.710389   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:12.710443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:12.743545   70686 cri.go:89] found id: ""
	I0127 11:48:12.743572   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.743579   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:12.743584   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:12.743646   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:12.775386   70686 cri.go:89] found id: ""
	I0127 11:48:12.775413   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.775423   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:12.775430   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:12.775488   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:12.808803   70686 cri.go:89] found id: ""
	I0127 11:48:12.808828   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.808835   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:12.808841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:12.808898   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:12.842531   70686 cri.go:89] found id: ""
	I0127 11:48:12.842554   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.842561   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:12.842566   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:12.842610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:12.875470   70686 cri.go:89] found id: ""
	I0127 11:48:12.875501   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.875512   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:12.875522   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:12.875579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:12.908768   70686 cri.go:89] found id: ""
	I0127 11:48:12.908790   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.908797   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:12.908802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:12.908848   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:12.943312   70686 cri.go:89] found id: ""
	I0127 11:48:12.943340   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.943348   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:12.943356   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:12.943368   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:12.995939   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:12.995971   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:13.009006   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:13.009028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:13.097589   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:13.097607   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:13.097618   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:13.180494   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:13.180526   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:15.719725   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:15.733707   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:15.733770   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:15.771051   70686 cri.go:89] found id: ""
	I0127 11:48:15.771076   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.771086   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:15.771094   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:15.771156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:15.803893   70686 cri.go:89] found id: ""
	I0127 11:48:15.803926   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.803938   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:15.803945   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:15.803995   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:15.840882   70686 cri.go:89] found id: ""
	I0127 11:48:15.840915   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.840927   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:15.840935   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:15.840993   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:15.879101   70686 cri.go:89] found id: ""
	I0127 11:48:15.879132   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.879144   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:15.879165   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:15.879227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:15.910272   70686 cri.go:89] found id: ""
	I0127 11:48:15.910306   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.910317   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:15.910325   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:15.910385   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:15.942060   70686 cri.go:89] found id: ""
	I0127 11:48:15.942085   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.942093   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:15.942099   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:15.942160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:15.975105   70686 cri.go:89] found id: ""
	I0127 11:48:15.975136   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.975147   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:15.975155   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:15.975219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:16.009270   70686 cri.go:89] found id: ""
	I0127 11:48:16.009302   70686 logs.go:282] 0 containers: []
	W0127 11:48:16.009313   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:16.009323   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:16.009337   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:16.059868   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:16.059901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:16.074089   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:16.074118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:16.150389   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:16.150435   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:16.150450   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:16.226031   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:16.226070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:18.766131   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:18.780688   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:18.780758   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:18.827413   70686 cri.go:89] found id: ""
	I0127 11:48:18.827443   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.827454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:18.827462   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:18.827528   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:18.890142   70686 cri.go:89] found id: ""
	I0127 11:48:18.890169   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.890179   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:18.890187   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:18.890252   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:18.921896   70686 cri.go:89] found id: ""
	I0127 11:48:18.921925   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.921933   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:18.921938   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:18.921989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:18.956705   70686 cri.go:89] found id: ""
	I0127 11:48:18.956728   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.956736   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:18.956744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:18.956813   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:18.989832   70686 cri.go:89] found id: ""
	I0127 11:48:18.989858   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.989868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:18.989874   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:18.989929   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:19.026132   70686 cri.go:89] found id: ""
	I0127 11:48:19.026159   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.026166   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:19.026173   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:19.026219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:19.059138   70686 cri.go:89] found id: ""
	I0127 11:48:19.059162   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.059170   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:19.059175   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:19.059220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:19.092018   70686 cri.go:89] found id: ""
	I0127 11:48:19.092048   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.092058   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:19.092069   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:19.092085   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:19.167121   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:19.167152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:19.205334   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:19.205364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:19.254602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:19.254639   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:19.268979   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:19.269006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:19.338679   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:21.839791   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:21.852667   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:21.852727   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:21.886171   70686 cri.go:89] found id: ""
	I0127 11:48:21.886197   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.886205   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:21.886210   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:21.886257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:21.921652   70686 cri.go:89] found id: ""
	I0127 11:48:21.921679   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.921689   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:21.921696   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:21.921755   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:21.957643   70686 cri.go:89] found id: ""
	I0127 11:48:21.957670   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.957679   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:21.957686   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:21.957746   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:21.992841   70686 cri.go:89] found id: ""
	I0127 11:48:21.992871   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.992881   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:21.992888   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:21.992952   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:22.028313   70686 cri.go:89] found id: ""
	I0127 11:48:22.028356   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.028365   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:22.028376   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:22.028421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:22.063653   70686 cri.go:89] found id: ""
	I0127 11:48:22.063679   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.063686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:22.063692   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:22.063749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:22.095804   70686 cri.go:89] found id: ""
	I0127 11:48:22.095831   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.095839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:22.095845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:22.095904   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:22.128161   70686 cri.go:89] found id: ""
	I0127 11:48:22.128194   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.128205   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:22.128217   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:22.128231   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:22.166325   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:22.166348   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:22.216549   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:22.216599   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:22.229716   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:22.229745   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:22.295957   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:22.295985   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:22.296000   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:24.876705   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:24.889666   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:24.889741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:24.923871   70686 cri.go:89] found id: ""
	I0127 11:48:24.923904   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.923915   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:24.923923   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:24.923983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:24.959046   70686 cri.go:89] found id: ""
	I0127 11:48:24.959078   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.959090   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:24.959098   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:24.959151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:24.994427   70686 cri.go:89] found id: ""
	I0127 11:48:24.994457   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.994468   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:24.994475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:24.994535   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:25.026201   70686 cri.go:89] found id: ""
	I0127 11:48:25.026230   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.026239   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:25.026247   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:25.026309   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:25.058228   70686 cri.go:89] found id: ""
	I0127 11:48:25.058250   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.058258   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:25.058263   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:25.058319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:25.089128   70686 cri.go:89] found id: ""
	I0127 11:48:25.089165   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.089176   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:25.089186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:25.089262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:25.124376   70686 cri.go:89] found id: ""
	I0127 11:48:25.124404   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.124411   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:25.124417   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:25.124464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:25.157926   70686 cri.go:89] found id: ""
	I0127 11:48:25.157959   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.157970   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:25.157982   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:25.157996   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:25.208316   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:25.208347   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:25.223045   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:25.223070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:25.289735   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:25.289757   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:25.289771   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:25.376030   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:25.376082   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:27.914186   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:27.926651   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:27.926716   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:27.965235   70686 cri.go:89] found id: ""
	I0127 11:48:27.965263   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.965273   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:27.965279   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:27.965334   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:27.999266   70686 cri.go:89] found id: ""
	I0127 11:48:27.999301   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.999312   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:27.999320   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:27.999377   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:28.031394   70686 cri.go:89] found id: ""
	I0127 11:48:28.031442   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.031454   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:28.031462   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:28.031524   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:28.063460   70686 cri.go:89] found id: ""
	I0127 11:48:28.063494   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.063505   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:28.063513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:28.063579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:28.098052   70686 cri.go:89] found id: ""
	I0127 11:48:28.098075   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.098082   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:28.098087   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:28.098133   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:28.132561   70686 cri.go:89] found id: ""
	I0127 11:48:28.132592   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.132601   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:28.132609   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:28.132668   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:28.173166   70686 cri.go:89] found id: ""
	I0127 11:48:28.173197   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.173206   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:28.173212   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:28.173263   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:28.207104   70686 cri.go:89] found id: ""
	I0127 11:48:28.207134   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.207144   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:28.207155   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:28.207169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:28.255860   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:28.255897   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:28.270823   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:28.270849   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:28.340536   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:28.340562   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:28.340577   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:28.424875   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:28.424910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:26.746474   70237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.570747097s)
	I0127 11:48:26.746545   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:26.762637   70237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:26.776063   70237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:26.789742   70237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:26.789766   70237 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:26.789818   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:48:26.800449   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:26.800505   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:26.818106   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:48:26.827104   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:26.827167   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:26.844719   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.861215   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:26.861299   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.877899   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:48:26.886638   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:26.886691   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
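This cleanup pass is mechanical: for each kubeconfig under /etc/kubernetes it greps for the endpoint this profile expects (https://control-plane.minikube.internal:8444) and, when the grep fails, removes the file with rm -f so kubeadm init can regenerate it. After the reset above the files are simply gone, so all four greps exit with status 2 and the removals are no-ops. A rough per-file equivalent follows, as a sketch only; the helper name is invented, and running it needs the same root privileges as the sudo rm -f in the log.

package main

import (
	"os"
	"strings"
)

// cleanStaleConf removes a kubeconfig that does not reference the
// expected control-plane endpoint. A missing or unreadable file is
// removed unconditionally, matching the rm -f in the log.
func cleanStaleConf(path, endpoint string) error {
	b, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(b), endpoint) {
		return os.RemoveAll(path) // RemoveAll is nil-safe for missing paths
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		_ = cleanStaleConf(f, endpoint)
	}
}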
	I0127 11:48:26.895347   70237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:27.038970   70237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:34.381659   70237 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:48:34.381747   70237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:48:34.381834   70237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:48:34.382006   70237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:48:34.382166   70237 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:48:34.382273   70237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:48:34.384155   70237 out.go:235]   - Generating certificates and keys ...
	I0127 11:48:34.384281   70237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:48:34.384383   70237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:48:34.384475   70237 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:48:34.384540   70237 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:48:34.384619   70237 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:48:34.384712   70237 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:48:34.384815   70237 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:48:34.384870   70237 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:48:34.384936   70237 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:48:34.385045   70237 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:48:34.385125   70237 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:48:34.385205   70237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:48:34.385276   70237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:48:34.385331   70237 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:48:34.385408   70237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:48:34.385500   70237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:48:34.385576   70237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:48:34.385691   70237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:48:34.385779   70237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:48:34.387105   70237 out.go:235]   - Booting up control plane ...
	I0127 11:48:34.387208   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:48:34.387285   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:48:34.387359   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:48:34.387457   70237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:48:34.387545   70237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:48:34.387589   70237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:48:34.387728   70237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:48:34.387818   70237 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:48:34.387875   70237 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001607262s
	I0127 11:48:34.387947   70237 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:48:34.388039   70237 kubeadm.go:310] [api-check] The API server is healthy after 4.002263796s
	I0127 11:48:34.388196   70237 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:48:34.388338   70237 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:48:34.388399   70237 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:48:34.388623   70237 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-407489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:48:34.388706   70237 kubeadm.go:310] [bootstrap-token] Using token: n96bmw.dtq43nz27fzxgr8y
	I0127 11:48:34.390189   70237 out.go:235]   - Configuring RBAC rules ...
	I0127 11:48:34.390316   70237 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:48:34.390409   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:48:34.390579   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:48:34.390756   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:48:34.390876   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:48:34.390986   70237 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:48:34.391159   70237 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:48:34.391231   70237 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:48:34.391299   70237 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:48:34.391310   70237 kubeadm.go:310] 
	I0127 11:48:34.391403   70237 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:48:34.391413   70237 kubeadm.go:310] 
	I0127 11:48:34.391518   70237 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:48:34.391530   70237 kubeadm.go:310] 
	I0127 11:48:34.391577   70237 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:48:34.391699   70237 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:48:34.391769   70237 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:48:34.391776   70237 kubeadm.go:310] 
	I0127 11:48:34.391868   70237 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:48:34.391882   70237 kubeadm.go:310] 
	I0127 11:48:34.391943   70237 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:48:34.391952   70237 kubeadm.go:310] 
	I0127 11:48:34.392024   70237 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:48:34.392099   70237 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:48:34.392204   70237 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:48:34.392219   70237 kubeadm.go:310] 
	I0127 11:48:34.392359   70237 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:48:34.392465   70237 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:48:34.392480   70237 kubeadm.go:310] 
	I0127 11:48:34.392616   70237 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.392829   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:48:34.392944   70237 kubeadm.go:310] 	--control-plane 
	I0127 11:48:34.392963   70237 kubeadm.go:310] 
	I0127 11:48:34.393089   70237 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:48:34.393100   70237 kubeadm.go:310] 
	I0127 11:48:34.393184   70237 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.393325   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
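
The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A standard-library Go sketch that recomputes such a hash; the certificate path is an assumption matching the certificateDir logged above:

	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Path assumed from the "[certs] Using certificateDir folder" line.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
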
	I0127 11:48:34.393340   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:48:34.393350   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:48:34.394995   70237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:48:30.970758   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:30.987346   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:30.987422   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:31.022870   70686 cri.go:89] found id: ""
	I0127 11:48:31.022900   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.022911   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:31.022919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:31.022980   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:31.056491   70686 cri.go:89] found id: ""
	I0127 11:48:31.056519   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.056529   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:31.056537   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:31.056593   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:31.091268   70686 cri.go:89] found id: ""
	I0127 11:48:31.091301   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.091313   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:31.091320   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:31.091378   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:31.124449   70686 cri.go:89] found id: ""
	I0127 11:48:31.124479   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.124489   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:31.124497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:31.124565   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:31.167383   70686 cri.go:89] found id: ""
	I0127 11:48:31.167410   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.167418   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:31.167424   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:31.167473   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:31.205066   70686 cri.go:89] found id: ""
	I0127 11:48:31.205165   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.205185   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:31.205194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:31.205265   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:31.242101   70686 cri.go:89] found id: ""
	I0127 11:48:31.242132   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.242144   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:31.242151   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:31.242208   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:31.278496   70686 cri.go:89] found id: ""
	I0127 11:48:31.278595   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.278610   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:31.278622   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:31.278645   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:31.366805   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:31.366835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:31.366851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:31.445608   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:31.445642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:31.487502   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:31.487529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:31.566139   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:31.566171   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
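
Each gather cycle above probes for one control-plane component at a time with `crictl ps -a --quiet --name=<component>`; an empty ID list produces the W-level "No container was found matching" lines before the kubelet, dmesg, and CRI-O journals are collected. A sketch of that probe loop, with `OutputRunner` as a hypothetical stand-in for minikube's ssh_runner:

	package sketch
	
	import (
		"log"
		"strings"
	)
	
	// OutputRunner is a hypothetical stand-in for minikube's ssh_runner.
	type OutputRunner interface {
		Output(cmd string) (string, error)
	}
	
	// probeContainers reproduces the per-component probe: one crictl listing
	// per name, with the same warning when nothing is found.
	func probeContainers(r OutputRunner) map[string][]string {
		found := map[string][]string{}
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		} {
			out, err := r.Output("sudo crictl ps -a --quiet --name=" + name)
			if err != nil {
				log.Printf("crictl failed for %s: %v", name, err)
				continue
			}
			ids := strings.Fields(out) // --quiet prints one container ID per line
			if len(ids) == 0 {
				log.Printf("No container was found matching %q", name)
				continue
			}
			found[name] = ids
		}
		return found
	}
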
	I0127 11:48:34.080397   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:34.094121   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:34.094187   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:34.131591   70686 cri.go:89] found id: ""
	I0127 11:48:34.131635   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.131646   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:34.131654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:34.131711   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:34.167143   70686 cri.go:89] found id: ""
	I0127 11:48:34.167175   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.167185   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:34.167192   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:34.167259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:34.203241   70686 cri.go:89] found id: ""
	I0127 11:48:34.203270   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.203283   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:34.203290   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:34.203349   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:34.238023   70686 cri.go:89] found id: ""
	I0127 11:48:34.238053   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.238061   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:34.238067   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:34.238115   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:34.273362   70686 cri.go:89] found id: ""
	I0127 11:48:34.273388   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.273398   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:34.273406   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:34.273469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:34.310047   70686 cri.go:89] found id: ""
	I0127 11:48:34.310073   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.310084   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:34.310092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:34.310148   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:34.346880   70686 cri.go:89] found id: ""
	I0127 11:48:34.346914   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.346924   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:34.346932   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:34.346987   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:34.382306   70686 cri.go:89] found id: ""
	I0127 11:48:34.382327   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.382339   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:34.382348   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:34.382364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:34.494656   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:34.494697   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:34.541974   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:34.542009   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:34.619534   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:34.619584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.634607   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:34.634631   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:34.705419   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:34.396212   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:48:34.408954   70237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
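
Because the kvm2 driver is paired with the crio runtime (cni.go:146 above), minikube falls back to its own bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact bytes are not in the log; the constant below is only the typical shape of such a bridge conflist, embedded as a Go string, and its fields and values may differ from the file actually written:

	package sketch
	
	// bridgeConflist is an illustrative bridge CNI config, not the
	// byte-for-byte file minikube copied above.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`
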
	I0127 11:48:34.431113   70237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:48:34.431252   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:34.431257   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-407489 minikube.k8s.io/updated_at=2025_01_27T11_48_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=default-k8s-diff-port-407489 minikube.k8s.io/primary=true
	I0127 11:48:34.469468   70237 ops.go:34] apiserver oom_adj: -16
	I0127 11:48:34.666106   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.167035   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.667149   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.167156   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.666148   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.167090   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.667139   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.166714   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.666209   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.166966   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.353909   70237 kubeadm.go:1113] duration metric: took 4.922724686s to wait for elevateKubeSystemPrivileges
	I0127 11:48:39.353963   70237 kubeadm.go:394] duration metric: took 4m58.742572387s to StartCluster
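
The half-second cadence of the `kubectl get sa default` runs above is a readiness gate: the default service account is created asynchronously by the controller manager, so minikube keeps polling until the get succeeds and only then records the elevateKubeSystemPrivileges duration metric. A minimal sketch of that loop, again with a hypothetical `Runner`:

	package sketch
	
	import (
		"errors"
		"time"
	)
	
	// Runner is a hypothetical stand-in for minikube's ssh_runner.
	type Runner interface {
		Run(cmd string) error
	}
	
	// waitForDefaultSA mirrors the ~500ms polling above: keep running
	// `get sa default` until it succeeds or the deadline passes.
	func waitForDefaultSA(r Runner, kubectl, kubeconfig string, timeout time.Duration) error {
		cmd := "sudo " + kubectl + " get sa default --kubeconfig=" + kubeconfig
		for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
			if err := r.Run(cmd); err == nil {
				return nil // the service account exists; privileges are elevated
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for the default service account")
	}
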
	I0127 11:48:39.353997   70237 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.354112   70237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:48:39.356217   70237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.356516   70237 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:48:39.356640   70237 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:48:39.356750   70237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356786   70237 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356793   70237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356805   70237 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356806   70237 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356812   70237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-407489"
	W0127 11:48:39.356815   70237 addons.go:247] addon metrics-server should already be in state true
	W0127 11:48:39.356814   70237 addons.go:247] addon dashboard should already be in state true
	W0127 11:48:39.356785   70237 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356919   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356780   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.357367   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357421   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357452   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357461   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357470   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357481   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357489   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357427   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.358335   70237 out.go:177] * Verifying Kubernetes components...
	I0127 11:48:39.359875   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:48:39.375814   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0127 11:48:39.376161   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0127 11:48:39.376320   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376584   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376816   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376834   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.376964   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376976   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.377329   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.377542   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.377878   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.378406   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.378448   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.378664   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0127 11:48:39.378707   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0127 11:48:39.379469   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.379520   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.380020   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.380031   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.380391   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.380901   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.380937   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.381376   70237 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-407489"
	W0127 11:48:39.381392   70237 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:48:39.381420   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.381774   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.381828   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.382425   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.382444   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.382932   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.383472   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.383515   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.399683   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0127 11:48:39.400302   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.400882   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.400901   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.401296   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.401495   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0127 11:48:39.401654   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0127 11:48:39.401894   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.401947   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402556   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402892   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402909   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.402980   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402997   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.403362   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0127 11:48:39.403805   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.403823   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.404268   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.404296   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.404472   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.404848   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.404929   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.405710   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.405726   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.406261   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.406477   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.406675   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.407171   70237 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:48:39.408344   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.408427   70237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:48:39.409688   70237 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:48:39.409753   70237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:48:37.206052   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:37.219444   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:37.219530   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:37.254304   70686 cri.go:89] found id: ""
	I0127 11:48:37.254334   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.254342   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:37.254349   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:37.254409   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:37.291229   70686 cri.go:89] found id: ""
	I0127 11:48:37.291264   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.291276   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:37.291289   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:37.291353   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:37.329358   70686 cri.go:89] found id: ""
	I0127 11:48:37.329381   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.329389   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:37.329394   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:37.329439   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:37.368500   70686 cri.go:89] found id: ""
	I0127 11:48:37.368529   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.368537   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:37.368543   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:37.368604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:37.400175   70686 cri.go:89] found id: ""
	I0127 11:48:37.400203   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.400213   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:37.400221   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:37.400284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:37.432661   70686 cri.go:89] found id: ""
	I0127 11:48:37.432687   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.432697   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:37.432704   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:37.432762   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:37.464843   70686 cri.go:89] found id: ""
	I0127 11:48:37.464886   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.464897   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:37.464905   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:37.464970   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:37.501795   70686 cri.go:89] found id: ""
	I0127 11:48:37.501818   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.501826   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:37.501835   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:37.501845   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:37.580256   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:37.580281   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:37.580297   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:37.658741   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:37.658790   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:37.701171   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:37.701198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:37.761906   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:37.761941   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.280848   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:40.294890   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:40.294962   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:40.333860   70686 cri.go:89] found id: ""
	I0127 11:48:40.333885   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.333904   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:40.333919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:40.333983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:40.377039   70686 cri.go:89] found id: ""
	I0127 11:48:40.377072   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.377083   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:40.377093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:40.377157   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:40.413874   70686 cri.go:89] found id: ""
	I0127 11:48:40.413899   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.413909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:40.413915   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:40.413976   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:40.453270   70686 cri.go:89] found id: ""
	I0127 11:48:40.453302   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.453313   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:40.453322   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:40.453438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:40.495704   70686 cri.go:89] found id: ""
	I0127 11:48:40.495739   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.495750   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:40.495759   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:40.495825   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:40.541078   70686 cri.go:89] found id: ""
	I0127 11:48:40.541117   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.541128   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:40.541135   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:40.541195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:40.577161   70686 cri.go:89] found id: ""
	I0127 11:48:40.577190   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.577201   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:40.577207   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:40.577267   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:40.611784   70686 cri.go:89] found id: ""
	I0127 11:48:40.611815   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.611825   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:40.611837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:40.611851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.627400   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:40.627429   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:40.697583   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:40.697609   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:40.697624   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:40.779493   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:40.779529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:40.829083   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:40.829117   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:39.409927   70237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.409949   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:48:39.409969   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410883   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:48:39.410891   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:48:39.410900   70237 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:48:39.410901   70237 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.414712   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415032   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415363   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415380   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415508   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415557   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.415793   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415795   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.415811   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415965   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416188   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.416193   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416207   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.416226   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.416326   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416464   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416647   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416856   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.417093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.417232   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.425335   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0127 11:48:39.425726   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.426147   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.426164   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.426496   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.426691   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.428519   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.428734   70237 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.428750   70237 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:48:39.428767   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.431736   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.431955   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.431979   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.432148   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.432352   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.432522   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.432669   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.622216   70237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:48:39.650134   70237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677286   70237 node_ready.go:49] node "default-k8s-diff-port-407489" has status "Ready":"True"
	I0127 11:48:39.677309   70237 node_ready.go:38] duration metric: took 27.135622ms for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677318   70237 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:39.687667   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
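
The node_ready and pod_ready waits above poll the API server until the node condition and each system-critical pod condition report Ready. A client-go sketch of the pod half of that wait; this is a bare poll loop under assumed parameters, not minikube's actual helper:

	package sketch
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls until the pod's Ready condition is True, mirroring
	// the pod_ready.go waits in the log above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}
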
	I0127 11:48:39.731665   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.746831   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.793916   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:48:39.793939   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:48:39.875140   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:48:39.875167   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:48:39.930947   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:48:39.930970   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:48:39.943793   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:48:39.943816   70237 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:48:39.993962   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:48:39.993993   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:48:40.041925   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:48:40.041962   70237 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:48:40.045715   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:48:40.045733   70237 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:48:40.168240   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:48:40.168261   70237 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:48:40.170308   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.170329   70237 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:48:40.222208   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:48:40.222229   70237 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:48:40.226028   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.312875   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:48:40.312990   70237 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:48:40.389058   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.389088   70237 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:48:40.437979   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
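
Every addon above follows the same stage-then-apply shape: manifests are copied one by one into /etc/kubernetes/addons over SSH, then applied in a single kubectl invocation against the in-VM kubeconfig (the long `-f ... -f ...` command lines). A sketch of that pattern, with `Copier` and `Runner` as hypothetical stand-ins for minikube's ssh_runner helpers; files are applied in slice order, which is why the namespace manifest comes first in the dashboard apply above:

	package sketch
	
	import "strings"
	
	// Copier and Runner are hypothetical stand-ins for minikube's
	// ssh_runner scp and command helpers.
	type Copier interface {
		Copy(content []byte, dst string) error
	}
	
	type Runner interface {
		Run(cmd string) error
	}
	
	type addonFile struct {
		name    string
		content []byte
	}
	
	// applyAddon stages each manifest under /etc/kubernetes/addons, then
	// applies them all in one kubectl invocation, matching the single long
	// `kubectl apply -f ... -f ...` lines in the log.
	func applyAddon(c Copier, r Runner, kubectl string, files []addonFile) error {
		var paths []string
		for _, f := range files {
			dst := "/etc/kubernetes/addons/" + f.name
			if err := c.Copy(f.content, dst); err != nil {
				return err
			}
			paths = append(paths, dst)
		}
		cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " + kubectl +
			" apply -f " + strings.Join(paths, " -f ")
		return r.Run(cmd)
	}
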
	I0127 11:48:40.764016   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017148966s)
	I0127 11:48:40.764080   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764098   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032393238s)
	I0127 11:48:40.764145   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764163   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764466   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764476   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:40.764483   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764520   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764535   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764525   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764555   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764564   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764785   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764804   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764924   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.781921   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.781947   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.782236   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.782254   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294495   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.068429548s)
	I0127 11:48:41.294547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294560   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.294909   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.294914   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.294937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294945   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294952   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.295173   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.295220   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.295238   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.295255   70237 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:41.723523   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:41.929362   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.491326001s)
	I0127 11:48:41.929422   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929437   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.929779   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.929797   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.929815   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929825   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.930103   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.930125   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.930151   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.931487   70237 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-407489 addons enable metrics-server
	
	I0127 11:48:41.933107   70237 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
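The block above finishes the addon pass for this profile: manifests are scp'd to /etc/kubernetes/addons and applied with the bundled kubectl. A minimal sketch for spot-checking the metrics-server addon by hand; the context name comes from the log above, while the k8s-app=metrics-server label is the conventional one and an assumption here:

	# Hedged sketch — verify the addon the log just enabled.
	kubectl --context default-k8s-diff-port-407489 -n kube-system \
	  get pods -l k8s-app=metrics-server
	# Only returns data once the metrics API is actually serving:
	kubectl --context default-k8s-diff-port-407489 top nodes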
	I0127 11:48:43.382411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:43.399629   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:43.399702   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:43.433083   70686 cri.go:89] found id: ""
	I0127 11:48:43.433116   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.433127   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:43.433134   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:43.433207   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:43.471725   70686 cri.go:89] found id: ""
	I0127 11:48:43.471756   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.471788   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:43.471796   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:43.471861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:43.505911   70686 cri.go:89] found id: ""
	I0127 11:48:43.505944   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.505956   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:43.505964   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:43.506034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:43.545670   70686 cri.go:89] found id: ""
	I0127 11:48:43.545705   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.545715   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:43.545723   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:43.545773   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:43.588086   70686 cri.go:89] found id: ""
	I0127 11:48:43.588113   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.588124   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:43.588131   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:43.588193   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:43.626703   70686 cri.go:89] found id: ""
	I0127 11:48:43.626739   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.626747   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:43.626754   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:43.626810   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:43.666123   70686 cri.go:89] found id: ""
	I0127 11:48:43.666155   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.666164   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:43.666171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:43.666237   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:43.701503   70686 cri.go:89] found id: ""
	I0127 11:48:43.701527   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.701537   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:43.701548   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:43.701561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:43.752145   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:43.752177   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:43.766551   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:43.766579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:43.838715   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:43.838740   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:43.838753   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:43.923406   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:43.923439   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
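The cycle above (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) is the diagnostics pass minikube runs once it finds no control-plane containers. A sketch of the same checks run by hand; the failing profile's name is not shown in this log, so <profile> is a placeholder:

	# <profile> is hypothetical; substitute the failing profile's name.
	minikube -p <profile> ssh -- sudo crictl ps -a
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
	minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"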
	I0127 11:48:41.934427   70237 addons.go:514] duration metric: took 2.577793658s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:48:44.193593   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:46.470479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:46.483541   70686 kubeadm.go:597] duration metric: took 4m2.154865283s to restartPrimaryControlPlane
	W0127 11:48:46.483635   70686 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:48:46.483664   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:48:46.956612   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:46.970448   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:46.979726   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:46.990401   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:46.990418   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:46.990456   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:48:46.999850   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:46.999921   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:47.009371   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:48:47.019126   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:47.019177   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:47.029905   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.040611   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:47.040690   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.051767   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:48:47.063007   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:47.063076   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
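The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each of the four config files that does not reference the expected control-plane endpoint is removed before kubeadm init runs. The same logic as a loop (a sketch, run on the node; grep's non-zero exit is what triggers the removal, exactly as in the Run: lines above):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done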
	I0127 11:48:47.074431   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:47.304989   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
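The preflight warning above names its own remedy; if reproducing on the node, it is the command the warning suggests (shown here only as a sketch):

	sudo systemctl enable kubelet.service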
	I0127 11:48:46.196598   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:48.696840   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:49.199550   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.199588   70237 pod_ready.go:82] duration metric: took 9.511896787s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.199600   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205893   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.205926   70237 pod_ready.go:82] duration metric: took 6.298932ms for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205940   70237 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239052   70237 pod_ready.go:93] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.239081   70237 pod_ready.go:82] duration metric: took 33.131129ms for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239094   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265456   70237 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.265491   70237 pod_ready.go:82] duration metric: took 26.386948ms for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265505   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272301   70237 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.272330   70237 pod_ready.go:82] duration metric: took 6.816295ms for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272342   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591592   70237 pod_ready.go:93] pod "kube-proxy-26pw8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.591640   70237 pod_ready.go:82] duration metric: took 319.289955ms for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591655   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991689   70237 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.991721   70237 pod_ready.go:82] duration metric: took 400.056967ms for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991733   70237 pod_ready.go:39] duration metric: took 10.314402994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
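The pod_ready waits above gate on each system-critical component in turn, using the labels listed in the summary line. A rough kubectl equivalent of that readiness gate (a sketch, not the harness's actual code path; one label from the list shown, repeated per component in practice):

	kubectl --context default-k8s-diff-port-407489 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m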
	I0127 11:48:49.991751   70237 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:48:49.991813   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:50.013067   70237 api_server.go:72] duration metric: took 10.656516392s to wait for apiserver process to appear ...
	I0127 11:48:50.013088   70237 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:48:50.013114   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:48:50.018115   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 200:
	ok
	I0127 11:48:50.019049   70237 api_server.go:141] control plane version: v1.32.1
	I0127 11:48:50.019078   70237 api_server.go:131] duration metric: took 5.982015ms to wait for apiserver health ...
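Note the healthz probe above hits port 8444 rather than the default 8443, matching this profile's non-default API server port. By hand it reduces to a single request (a sketch; -k skips verification against the cluster CA, or pass the CA file via --cacert instead):

	curl -k https://192.168.39.69:8444/healthz   # the log shows this returning 200 "ok"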
	I0127 11:48:50.019088   70237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:48:50.196032   70237 system_pods.go:59] 9 kube-system pods found
	I0127 11:48:50.196064   70237 system_pods.go:61] "coredns-668d6bf9bc-pd5ml" [c33b4c24-e93a-4370-a289-6dca24315394] Running
	I0127 11:48:50.196070   70237 system_pods.go:61] "coredns-668d6bf9bc-sdf87" [30fc6237-1829-4315-b9cf-3354bd7a96a5] Running
	I0127 11:48:50.196075   70237 system_pods.go:61] "etcd-default-k8s-diff-port-407489" [d228476b-110d-4de7-9afe-08c2371bbb0e] Running
	I0127 11:48:50.196079   70237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-407489" [a059a0c6-34f1-46c3-9b67-adef842174f9] Running
	I0127 11:48:50.196083   70237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-407489" [aa65ad17-6f66-42c1-ad23-199b374d2104] Running
	I0127 11:48:50.196087   70237 system_pods.go:61] "kube-proxy-26pw8" [c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510] Running
	I0127 11:48:50.196090   70237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-407489" [190cc5cb-ab22-4143-a84a-3c4d975728c3] Running
	I0127 11:48:50.196098   70237 system_pods.go:61] "metrics-server-f79f97bbb-d7r6d" [6bd8680e-8338-48a2-b29b-a913d195bc9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:48:50.196102   70237 system_pods.go:61] "storage-provisioner" [58b014bb-8629-4398-a2ec-6ec95fa59111] Running
	I0127 11:48:50.196111   70237 system_pods.go:74] duration metric: took 177.016669ms to wait for pod list to return data ...
	I0127 11:48:50.196118   70237 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:48:50.392617   70237 default_sa.go:45] found service account: "default"
	I0127 11:48:50.392652   70237 default_sa.go:55] duration metric: took 196.52383ms for default service account to be created ...
	I0127 11:48:50.392664   70237 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:48:50.594360   70237 system_pods.go:87] 9 kube-system pods found
	I0127 11:50:43.920463   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:50:43.920584   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:50:43.922146   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:43.922214   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:43.922320   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:43.922480   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:43.922613   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:43.922673   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:43.924430   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:43.924530   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:43.924611   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:43.924680   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:43.924766   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:43.924851   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:43.924925   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:43.924977   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:43.925025   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:43.925150   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:43.925259   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:43.925316   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:43.925398   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:43.925467   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:43.925544   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:43.925633   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:43.925704   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:43.925839   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:43.925952   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:43.926012   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:43.926098   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:43.927567   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:43.927670   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:43.927749   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:43.927813   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:43.927885   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:43.928078   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:50:43.928123   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:50:43.928184   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928340   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928398   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928569   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928631   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928792   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928850   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929077   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929185   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929391   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929402   70686 kubeadm.go:310] 
	I0127 11:50:43.929456   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:50:43.929518   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:50:43.929531   70686 kubeadm.go:310] 
	I0127 11:50:43.929584   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:50:43.929647   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:50:43.929784   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:50:43.929800   70686 kubeadm.go:310] 
	I0127 11:50:43.929915   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:50:43.929961   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:50:43.930009   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:50:43.930019   70686 kubeadm.go:310] 
	I0127 11:50:43.930137   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:50:43.930253   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:50:43.930266   70686 kubeadm.go:310] 
	I0127 11:50:43.930419   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:50:43.930528   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:50:43.930621   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:50:43.930695   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:50:43.930745   70686 kubeadm.go:310] 
	W0127 11:50:43.930804   70686 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
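Everything after the 40s kubelet-check timeout above reduces to one question: did the kubelet ever start serving its healthz endpoint on 10248? The checks below come straight from the troubleshooting hints in the failure output, runnable on the node:

	curl -sSL http://localhost:10248/healthz || systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet -n 50 --no-pager
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause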
	
	I0127 11:50:43.930840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:50:44.381980   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:44.397504   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:50:44.407258   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:50:44.407280   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:50:44.407331   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:50:44.416517   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:50:44.416588   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:50:44.425543   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:50:44.433996   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:50:44.434043   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:50:44.442792   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.452342   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:50:44.452410   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.462650   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:50:44.471925   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:50:44.471985   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:50:44.481004   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:50:44.552326   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:44.552414   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:44.696875   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:44.697032   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:44.697169   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:44.872468   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:44.875109   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:44.875201   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:44.875263   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:44.875350   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:44.875402   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:44.875466   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:44.875514   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:44.875570   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:44.875679   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:44.875792   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:44.875910   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:44.875976   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:44.876030   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:45.015504   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:45.106020   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:45.326707   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:45.574018   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:45.595960   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:45.597194   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:45.597402   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:45.740527   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:45.743100   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:45.743237   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:45.746496   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:45.747484   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:45.748125   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:45.750039   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:51:25.751949   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:51:25.752243   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:25.752539   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:30.752865   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:30.753104   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:40.753548   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:40.753726   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:00.754215   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:00.754448   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753038   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:40.753327   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753353   70686 kubeadm.go:310] 
	I0127 11:52:40.753414   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:52:40.753473   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:52:40.753483   70686 kubeadm.go:310] 
	I0127 11:52:40.753541   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:52:40.753590   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:52:40.753730   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:52:40.753743   70686 kubeadm.go:310] 
	I0127 11:52:40.753898   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:52:40.753957   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:52:40.754014   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:52:40.754030   70686 kubeadm.go:310] 
	I0127 11:52:40.754195   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:52:40.754312   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:52:40.754321   70686 kubeadm.go:310] 
	I0127 11:52:40.754453   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:52:40.754573   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:52:40.754670   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:52:40.754766   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:52:40.754777   70686 kubeadm.go:310] 
	I0127 11:52:40.755376   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:40.755478   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:52:40.755572   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:52:40.755648   70686 kubeadm.go:394] duration metric: took 7m56.47359007s to StartCluster
	I0127 11:52:40.755695   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:52:40.755757   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:52:40.792993   70686 cri.go:89] found id: ""
	I0127 11:52:40.793026   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.793045   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:52:40.793055   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:52:40.793116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:52:40.832368   70686 cri.go:89] found id: ""
	I0127 11:52:40.832397   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.832410   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:52:40.832417   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:52:40.832478   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:52:40.865175   70686 cri.go:89] found id: ""
	I0127 11:52:40.865199   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.865208   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:52:40.865215   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:52:40.865280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:52:40.896556   70686 cri.go:89] found id: ""
	I0127 11:52:40.896586   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.896594   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:52:40.896600   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:52:40.896648   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:52:40.928729   70686 cri.go:89] found id: ""
	I0127 11:52:40.928765   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.928777   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:52:40.928784   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:52:40.928852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:52:40.962998   70686 cri.go:89] found id: ""
	I0127 11:52:40.963029   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.963039   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:52:40.963053   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:52:40.963111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:52:40.994577   70686 cri.go:89] found id: ""
	I0127 11:52:40.994606   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.994616   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:52:40.994623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:52:40.994669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:52:41.030825   70686 cri.go:89] found id: ""
	I0127 11:52:41.030861   70686 logs.go:282] 0 containers: []
	W0127 11:52:41.030872   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:52:41.030884   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:52:41.030900   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:52:41.084683   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:52:41.084714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:52:41.098908   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:52:41.098946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:52:41.176430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:52:41.176453   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:52:41.176465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:52:41.290183   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:52:41.290219   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 11:52:41.336066   70686 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:52:41.336124   70686 out.go:270] * 
	W0127 11:52:41.336202   70686 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the log above; omitted ...]
	
	W0127 11:52:41.336227   70686 out.go:270] * 
	W0127 11:52:41.337558   70686 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:52:41.341361   70686 out.go:201] 
	W0127 11:52:41.342596   70686 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the log above; omitted ...]
	
	W0127 11:52:41.342686   70686 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:52:41.342709   70686 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:52:41.344162   70686 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.649311004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737978762649290336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87e3a6de-5514-497c-bc14-73e29de9e4e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.649825774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5aa626b-01e7-43fd-b865-d7beb5737763 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.649914510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5aa626b-01e7-43fd-b865-d7beb5737763 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.649962413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5aa626b-01e7-43fd-b865-d7beb5737763 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.686384765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a285870-a2d2-43b3-8217-fa2081c6cd54 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.686520245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a285870-a2d2-43b3-8217-fa2081c6cd54 name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.687816687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c450d2f1-4961-4002-acb9-552884ea8d9c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.688169669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737978762688151611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c450d2f1-4961-4002-acb9-552884ea8d9c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.691618018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5311546-61f4-418b-ae91-2576d1845112 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.691694445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5311546-61f4-418b-ae91-2576d1845112 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.691745464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c5311546-61f4-418b-ae91-2576d1845112 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.723683378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39b08bfe-1b73-494e-8ed1-8f155d132c1a name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.723771874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39b08bfe-1b73-494e-8ed1-8f155d132c1a name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.724792006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=045c9175-bd50-47b4-b49e-6a3b75cbf5e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.725219216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737978762725195374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=045c9175-bd50-47b4-b49e-6a3b75cbf5e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.725807102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8f963db-b4b9-48bf-b415-7c7ad7f874b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.725876310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8f963db-b4b9-48bf-b415-7c7ad7f874b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.725916926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b8f963db-b4b9-48bf-b415-7c7ad7f874b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.755011389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82950032-3944-4f5b-99cd-3b1ac52dcd8c name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.755088997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82950032-3944-4f5b-99cd-3b1ac52dcd8c name=/runtime.v1.RuntimeService/Version
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.755934932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c76a651-ae24-49d9-a32f-8c55414cb95d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.756280968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737978762756262576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c76a651-ae24-49d9-a32f-8c55414cb95d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.756671366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=149dec6a-0f07-40b5-833d-39693afd85ab name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.756727394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=149dec6a-0f07-40b5-833d-39693afd85ab name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 11:52:42 old-k8s-version-570778 crio[639]: time="2025-01-27 11:52:42.756761004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=149dec6a-0f07-40b5-833d-39693afd85ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 11:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049235] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981407] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.993552] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.590314] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.056000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054815] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.178788] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.126988] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.243997] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.090921] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.064410] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.869247] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +12.042296] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 11:48] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Jan27 11:50] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.066337] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:52:42 up 8 min,  0 users,  load average: 0.02, 0.14, 0.09
	Linux old-k8s-version-570778 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c4d1a0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c1ab10, 0x24, 0x0, ...)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: net.(*Dialer).DialContext(0xc000b720c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c1ab10, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b6efa0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c1ab10, 0x24, 0x60, 0x7fdc2507c810, 0x118, ...)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: net/http.(*Transport).dial(0xc0007f3540, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c1ab10, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: net/http.(*Transport).dialConn(0xc0007f3540, 0x4f7fe00, 0xc000052030, 0x0, 0xc00034c600, 0x5, 0xc000c1ab10, 0x24, 0x0, 0xc000b97e60, ...)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: net/http.(*Transport).dialConnFor(0xc0007f3540, 0xc000bbbce0)
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]: created by net/http.(*Transport).queueForDial
	Jan 27 11:52:40 old-k8s-version-570778 kubelet[5517]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jan 27 11:52:40 old-k8s-version-570778 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 11:52:40 old-k8s-version-570778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 11:52:41 old-k8s-version-570778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 11:52:41 old-k8s-version-570778 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 11:52:41 old-k8s-version-570778 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 11:52:41 old-k8s-version-570778 kubelet[5579]: I0127 11:52:41.414829    5579 server.go:416] Version: v1.20.0
	Jan 27 11:52:41 old-k8s-version-570778 kubelet[5579]: I0127 11:52:41.415110    5579 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 11:52:41 old-k8s-version-570778 kubelet[5579]: I0127 11:52:41.417248    5579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 11:52:41 old-k8s-version-570778 kubelet[5579]: W0127 11:52:41.418084    5579 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 11:52:41 old-k8s-version-570778 kubelet[5579]: I0127 11:52:41.418379    5579 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
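The kubelet excerpt above ends in a systemd crash loop (kubelet.service exits with status=255 and the restart counter is at 20) right after a "Cannot detect current cgroup on cgroup v2" warning, and the container-status table is empty, so CRI-O never started a control-plane container. A minimal triage sketch on the node, using only the commands the log itself recommends (the crio socket path comes from the kubeadm hint above):

	# Why does the kubelet keep dying? (kubeadm's own suggestions)
	systemctl status kubelet
	journalctl -xeu kubelet

	# Did CRI-O start any control-plane containers at all?
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# CONTAINERID is a placeholder for an ID taken from the listing above
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID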
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (235.386342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570778" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.85s)
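The suggestion and related issue above point at a kubelet cgroup-driver mismatch on this Buildroot guest. A hedged sketch of the suggested retry, reusing the profile name and flag exactly as they appear in this log (whether it clears the failure for Kubernetes v1.20.0 on a cgroup v2 host is not verified here):

	minikube start -p old-k8s-version-570778 \
	  --extra-config=kubelet.cgroup-driver=systemd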

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
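The warnings that follow are the harness polling the dashboard pod list against an apiserver that never came back. A manual equivalent of that poll (the kubectl context is assumed to match the minikube profile name; the namespace and label selector are taken from this run):

	kubectl --context old-k8s-version-570778 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard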
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
[... the preceding warning repeated 73 more times while 192.168.50.193:8443 kept refusing connections ...]
E0127 11:53:57.624598   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
E0127 11:54:26.925140   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
E0127 11:57:34.555504   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
[warning above repeated 112 more times]
E0127 11:59:26.924619   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
	[previous line repeated 135 more times]
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (225.764293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-570778" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (232.044984ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25: (1.007480617s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-429764 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | disable-driver-mounts-429764                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:41 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-273200             | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-986409            | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-407489  | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:43 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273200                  | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-986409                 | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC | 27 Jan 25 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570778        | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-407489       | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC | 27 Jan 25 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC |                     |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570778             | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
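	
	For reference, the final "start" entry in the audit table above reassembles into the following single invocation (a sketch built from the table rows; the out/minikube-linux-amd64 binary path matches the one used elsewhere in this report):
	
	  out/minikube-linux-amd64 start -p old-k8s-version-570778 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0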
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:44:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
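	
	Given the [IWEF] severity prefix documented above, warning/error/fatal lines can be pulled out of a captured log with a simple filter (a sketch; minikube.log is a stand-in name for wherever this output was saved, and the anchor assumes an unindented capture):
	
	  # keep only Warning/Error/Fatal lines, per the [IWEF]mmdd prefix
	  grep -E '^[WEF][0-9]{4} ' minikube.log
	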
	I0127 11:44:15.929598   70686 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:44:15.929689   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929697   70686 out.go:358] Setting ErrFile to fd 2...
	I0127 11:44:15.929701   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929887   70686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:44:15.930463   70686 out.go:352] Setting JSON to false
	I0127 11:44:15.931400   70686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8756,"bootTime":1737969500,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:44:15.931492   70686 start.go:139] virtualization: kvm guest
	I0127 11:44:15.933961   70686 out.go:177] * [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:44:15.935491   70686 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:44:15.935496   70686 notify.go:220] Checking for updates...
	I0127 11:44:15.938050   70686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:44:15.939411   70686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:15.940688   70686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:44:15.942034   70686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:44:15.943410   70686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:44:12.181135   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.681538   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:15.945138   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:15.945529   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.945574   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.962483   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0127 11:44:15.963003   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.963519   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.963555   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.963966   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.964195   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:15.965767   70686 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:44:15.966927   70686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:44:15.967285   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.967321   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.981938   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0127 11:44:15.982353   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.982892   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.982918   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.983289   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.984121   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.021180   70686 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:44:16.022570   70686 start.go:297] selected driver: kvm2
	I0127 11:44:16.022584   70686 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.022687   70686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:44:16.023358   70686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.023431   70686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:44:16.038219   70686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:44:16.038645   70686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:44:16.038674   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:16.038706   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:16.038739   70686 start.go:340] cluster config:
	{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.038822   70686 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.041030   70686 out.go:177] * Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	I0127 11:44:16.042127   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:16.042176   70686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:44:16.042189   70686 cache.go:56] Caching tarball of preloaded images
	I0127 11:44:16.042300   70686 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:44:16.042314   70686 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
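	
	The preload cache verified above can be inspected directly on the build host (a sketch using the path from the log):
	
	  ls -lh /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/
	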
	I0127 11:44:16.042429   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:16.042632   70686 start.go:360] acquireMachinesLock for old-k8s-version-570778: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:44:16.042691   70686 start.go:364] duration metric: took 36.964µs to acquireMachinesLock for "old-k8s-version-570778"
	I0127 11:44:16.042707   70686 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:44:16.042713   70686 fix.go:54] fixHost starting: 
	I0127 11:44:16.043141   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:16.043185   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:16.057334   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0127 11:44:16.057814   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:16.058319   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:16.058342   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:16.059617   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:16.060717   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.060891   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetState
	I0127 11:44:16.062560   70686 fix.go:112] recreateIfNeeded on old-k8s-version-570778: state=Stopped err=<nil>
	I0127 11:44:16.062584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	W0127 11:44:16.062740   70686 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:44:16.064407   70686 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570778" ...
	I0127 11:44:14.581269   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.080972   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.765953   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.266323   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:16.065876   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .Start
	I0127 11:44:16.066119   70686 main.go:141] libmachine: (old-k8s-version-570778) starting domain...
	I0127 11:44:16.066142   70686 main.go:141] libmachine: (old-k8s-version-570778) ensuring networks are active...
	I0127 11:44:16.066789   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network default is active
	I0127 11:44:16.067106   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network mk-old-k8s-version-570778 is active
	I0127 11:44:16.067438   70686 main.go:141] libmachine: (old-k8s-version-570778) getting domain XML...
	I0127 11:44:16.068030   70686 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:44:17.326422   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for IP...
	I0127 11:44:17.327356   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.327887   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.327973   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.327883   70721 retry.go:31] will retry after 224.653843ms: waiting for domain to come up
	I0127 11:44:17.554516   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.555006   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.555033   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.554963   70721 retry.go:31] will retry after 278.652732ms: waiting for domain to come up
	I0127 11:44:17.835676   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.836235   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.836263   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.836216   70721 retry.go:31] will retry after 413.765366ms: waiting for domain to come up
	I0127 11:44:18.251786   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.252318   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.252359   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.252291   70721 retry.go:31] will retry after 384.166802ms: waiting for domain to come up
	I0127 11:44:18.637567   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.638099   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.638123   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.638055   70721 retry.go:31] will retry after 472.449239ms: waiting for domain to come up
	I0127 11:44:19.112411   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.112876   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.112900   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.112842   70721 retry.go:31] will retry after 883.60392ms: waiting for domain to come up
	I0127 11:44:19.997950   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.998399   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.998421   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.998373   70721 retry.go:31] will retry after 736.173761ms: waiting for domain to come up
	I0127 11:44:20.736442   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:20.736964   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:20.737021   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:20.736930   70721 retry.go:31] will retry after 1.379977469s: waiting for domain to come up
	I0127 11:44:17.182032   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.184122   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.581213   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.079928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.765581   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.265882   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
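	
	The pod_ready.go:103 lines interleaved through this log are separate test processes polling their own clusters for the metrics-server pod to report Ready. The same condition can be checked by hand (a sketch; the k8s-app=metrics-server label is the one stock metrics-server manifests carry and is an assumption here, and the kubeconfig context must be pointed at the profile being polled):
	
	  # report the pod's Ready condition directly
	  kubectl -n kube-system get pod -l k8s-app=metrics-server \
	    -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'
	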
	I0127 11:44:22.118774   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:22.119315   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:22.119346   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:22.119278   70721 retry.go:31] will retry after 1.846963021s: waiting for domain to come up
	I0127 11:44:23.968284   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:23.968756   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:23.968788   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:23.968709   70721 retry.go:31] will retry after 1.595738144s: waiting for domain to come up
	I0127 11:44:25.565970   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:25.566464   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:25.566496   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:25.566430   70721 retry.go:31] will retry after 2.837671431s: waiting for domain to come up
	I0127 11:44:21.681373   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.682555   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.080232   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.080547   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.764338   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.766071   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.405715   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:28.406305   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:28.406335   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:28.406277   70721 retry.go:31] will retry after 3.421231106s: waiting for domain to come up
	I0127 11:44:26.181747   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.681419   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.681567   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.081045   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.579496   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.580035   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:29.264366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.264892   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.828582   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:31.829032   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:31.829085   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:31.829004   70721 retry.go:31] will retry after 3.418527811s: waiting for domain to come up
	I0127 11:44:35.249695   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250229   70686 main.go:141] libmachine: (old-k8s-version-570778) found domain IP: 192.168.50.193
	I0127 11:44:35.250264   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has current primary IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250273   70686 main.go:141] libmachine: (old-k8s-version-570778) reserving static IP address...
	I0127 11:44:35.250765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.250797   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | skip adding static IP to network mk-old-k8s-version-570778 - found existing host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"}
	I0127 11:44:35.250814   70686 main.go:141] libmachine: (old-k8s-version-570778) reserved static IP address 192.168.50.193 for domain old-k8s-version-570778
	I0127 11:44:35.250832   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for SSH...
	I0127 11:44:35.250848   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Getting to WaitForSSH function...
	I0127 11:44:35.253216   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253538   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.253571   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253691   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH client type: external
	I0127 11:44:35.253719   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa (-rw-------)
	I0127 11:44:35.253750   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:44:35.253765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | About to run SSH command:
	I0127 11:44:35.253782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | exit 0
	I0127 11:44:35.375237   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | SSH cmd err, output: <nil>: 
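	
	The WaitForSSH probe above can be reproduced by hand with the same key and an abridged set of the options the driver passes to /usr/bin/ssh (a sketch assembled from the DBG line above):
	
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa \
	    docker@192.168.50.193 'exit 0' && echo reachable
	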
	I0127 11:44:35.375580   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:44:35.376204   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.378824   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379163   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.379195   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379421   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:35.379692   70686 machine.go:93] provisionDockerMachine start ...
	I0127 11:44:35.379720   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:35.379910   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.382057   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382361   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.382392   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382559   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.382738   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.382901   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.383079   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.383243   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.383528   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.383542   70686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:44:35.483536   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:44:35.483585   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.483889   70686 buildroot.go:166] provisioning hostname "old-k8s-version-570778"
	I0127 11:44:35.483924   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.484119   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.487189   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487543   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.487569   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487813   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.488019   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488147   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488310   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.488454   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.488629   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.488641   70686 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570778 && echo "old-k8s-version-570778" | sudo tee /etc/hostname
	I0127 11:44:35.606107   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570778
	
	I0127 11:44:35.606140   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.609822   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610293   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.610329   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610472   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.610663   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610815   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610983   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.611167   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.611325   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.611342   70686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570778/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:44:35.720742   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
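	
	The shell snippet above pins the machine name to 127.0.1.1 inside the guest; whether the entry landed can be confirmed over the same SSH session (a sketch):
	
	  grep -n '127.0.1.1' /etc/hosts
	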
	I0127 11:44:35.720779   70686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:44:35.720803   70686 buildroot.go:174] setting up certificates
	I0127 11:44:35.720814   70686 provision.go:84] configureAuth start
	I0127 11:44:35.720826   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.721065   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.723782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724254   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.724290   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724483   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.726871   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.727196   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727322   70686 provision.go:143] copyHostCerts
	I0127 11:44:35.727369   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:44:35.727384   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:44:35.727452   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:44:35.727537   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:44:35.727545   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:44:35.727569   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:44:35.727649   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:44:35.727659   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:44:35.727686   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:44:35.727741   70686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570778 san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
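	
	configureAuth regenerates server.pem with the SAN list shown above; the resulting certificate can be verified with openssl (a sketch against the path from the log):
	
	  openssl x509 -in /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem \
	    -noout -text | grep -A1 'Subject Alternative Name'
	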
	I0127 11:44:35.901422   70686 provision.go:177] copyRemoteCerts
	I0127 11:44:35.901473   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:44:35.901501   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.904015   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904354   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.904378   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904597   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.904771   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.904967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.905126   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:32.681781   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.682249   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.078928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.079470   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.985261   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:44:36.008090   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:44:36.031357   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:44:36.053784   70686 provision.go:87] duration metric: took 332.958985ms to configureAuth
	I0127 11:44:36.053812   70686 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:44:36.053986   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:36.054066   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.056825   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.057186   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057398   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.057612   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057801   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.058191   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.058400   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.058425   70686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:44:36.280974   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:44:36.281007   70686 machine.go:96] duration metric: took 901.295604ms to provisionDockerMachine
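	The step above writes a sysconfig drop-in for CRI-O over SSH and restarts the service so the --insecure-registry flag takes effect. A minimal Go sketch of how that one-liner could be assembled (crioSysconfigCmd is an illustrative helper name, not minikube's actual source):

	package main

	import "fmt"

	// crioSysconfigCmd builds the shell one-liner seen in the log: create
	// /etc/sysconfig, write the drop-in, then restart CRI-O.
	func crioSysconfigCmd(serviceCIDR string) string {
		opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
		return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" + opts +
			"\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
	}

	func main() {
		// 10.96.0.0/12 is the service CIDR from the log above.
		fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
	}
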
	I0127 11:44:36.281020   70686 start.go:293] postStartSetup for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:44:36.281033   70686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:44:36.281048   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.281334   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:44:36.281366   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.283980   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284452   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.284493   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284602   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.284759   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.284915   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.285033   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.361994   70686 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:44:36.366066   70686 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:44:36.366085   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:44:36.366142   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:44:36.366211   70686 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:44:36.366293   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:44:36.374729   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:36.396427   70686 start.go:296] duration metric: took 115.392742ms for postStartSetup
	I0127 11:44:36.396468   70686 fix.go:56] duration metric: took 20.353754717s for fixHost
	I0127 11:44:36.396491   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.399680   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400070   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.400097   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400246   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.400438   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400591   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400821   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.401019   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.401189   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.401200   70686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:44:36.500185   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978276.474640374
	
	I0127 11:44:36.500211   70686 fix.go:216] guest clock: 1737978276.474640374
	I0127 11:44:36.500221   70686 fix.go:229] Guest: 2025-01-27 11:44:36.474640374 +0000 UTC Remote: 2025-01-27 11:44:36.396473102 +0000 UTC m=+20.504127240 (delta=78.167272ms)
	I0127 11:44:36.500239   70686 fix.go:200] guest clock delta is within tolerance: 78.167272ms
	I0127 11:44:36.500256   70686 start.go:83] releasing machines lock for "old-k8s-version-570778", held for 20.457556974s
	I0127 11:44:36.500274   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.500555   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:36.503395   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503819   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.503860   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503969   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504404   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504676   70686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:44:36.504723   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.504778   70686 ssh_runner.go:195] Run: cat /version.json
	I0127 11:44:36.504802   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.507787   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.507815   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508140   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508175   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508207   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508225   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508347   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508547   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508557   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508735   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.508749   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508887   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.509027   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.509185   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.584389   70686 ssh_runner.go:195] Run: systemctl --version
	I0127 11:44:36.606466   70686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:44:36.746477   70686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:44:36.751936   70686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:44:36.751996   70686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:44:36.768698   70686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
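	The two steps above disable any preinstalled bridge/podman CNI configs by renaming them to *.mk_disabled, so CRI-O ignores them in favor of minikube's own CNI. A rough Go equivalent of the logged find/mv pipeline (illustrative only, error handling trimmed):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, p := range matches {
			base := filepath.Base(p)
			// Same filter as the logged find command: bridge/podman configs
			// that have not already been disabled.
			if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
				!strings.HasSuffix(base, ".mk_disabled") {
				fmt.Println("disabling", p)
				_ = os.Rename(p, p+".mk_disabled")
			}
		}
	}
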
	I0127 11:44:36.768722   70686 start.go:495] detecting cgroup driver to use...
	I0127 11:44:36.768788   70686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:44:36.786842   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:44:36.799832   70686 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:44:36.799893   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:44:36.813751   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:44:36.827731   70686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:44:36.943310   70686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:44:37.088722   70686 docker.go:233] disabling docker service ...
	I0127 11:44:37.088789   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:44:37.103240   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:44:37.116205   70686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:44:37.254006   70686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:44:37.365382   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:44:37.379019   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:44:37.396330   70686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:44:37.396405   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.406845   70686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:44:37.406919   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.417968   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.428079   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
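	The sed -i edits above retarget the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf. A sketch of the same line-rewrite in Go (setCrioOption is an assumed helper; it works on an in-memory string for brevity):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCrioOption replaces an entire `key = ...` line, mirroring the
	// logged `sed -i 's|^.*key = .*$|key = "value"|'` commands.
	func setCrioOption(conf, key, value string) string {
		re := regexp.MustCompile("(?m)^.*" + key + " = .*$")
		return re.ReplaceAllString(conf, key+" = \""+value+"\"")
	}

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}
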
	I0127 11:44:37.438133   70686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:44:37.448951   70686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:44:37.458320   70686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:44:37.458382   70686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:44:37.476279   70686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
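	The sequence above is a probe-then-fallback: if the bridge-netfilter sysctl key is missing (the status 255 seen in the log), load br_netfilter and retry. A sketch assuming a Linux guest with sudo and modprobe available (ensureBridgeNetfilter is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureBridgeNetfilter() error {
		// If the sysctl key exists, the module is already loaded.
		if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
			return nil
		}
		// Key missing: load the module, then recheck.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
		return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("bridge netfilter unavailable:", err)
		}
	}
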
	I0127 11:44:37.486232   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:37.609635   70686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:44:37.703117   70686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:44:37.703185   70686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:44:37.707780   70686 start.go:563] Will wait 60s for crictl version
	I0127 11:44:37.707827   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:37.711561   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:44:37.746285   70686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:44:37.746370   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.774346   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.804220   70686 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:44:33.764774   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.764854   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.765730   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.805652   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:37.808777   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809130   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:37.809168   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809355   70686 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:44:37.813621   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
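	The bash pipeline above is an idempotent hosts-file upsert: drop any existing host.minikube.internal line, append the fresh one, replace the file. A Go sketch of the same idea (upsertHostsEntry is an assumed name; the temp-file/sudo-cp dance from the log is omitted):

	package main

	import (
		"os"
		"strings"
	)

	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal")
	}
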
	I0127 11:44:37.826271   70686 kubeadm.go:883] updating cluster {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:44:37.826370   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:37.826406   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:37.875128   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:37.875204   70686 ssh_runner.go:195] Run: which lz4
	I0127 11:44:37.879162   70686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:44:37.883378   70686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:44:37.883408   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:44:39.317688   70686 crio.go:462] duration metric: took 1.438551878s to copy over tarball
	I0127 11:44:39.317750   70686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:44:37.181878   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.183457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.081149   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:41.579699   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.767830   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.265799   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.264081   70686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946305063s)
	I0127 11:44:42.264109   70686 crio.go:469] duration metric: took 2.946394656s to extract the tarball
	I0127 11:44:42.264117   70686 ssh_runner.go:146] rm: /preloaded.tar.lz4
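	The preload flow above ships an lz4 tarball of container images and unpacks it into /var so CRI-O's image store is pre-populated, then removes the archive. A sketch of the guest-side extract step, mirroring the logged tar invocation:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Same flags as the logged command: preserve xattrs (capabilities)
		// and decompress with lz4 while extracting into /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err == nil {
			os.Remove("/preloaded.tar.lz4") // free the disk once extracted
		}
	}
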
	I0127 11:44:42.307411   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:42.344143   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:42.344169   70686 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:44:42.344233   70686 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.344271   70686 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.344279   70686 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.344249   70686 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.344344   70686 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.344362   70686 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:44:42.344415   70686 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.344314   70686 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.345773   70686 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.346448   70686 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.346465   70686 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.346547   70686 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.488970   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.490931   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.497125   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.504183   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.508337   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.519103   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.523858   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:44:42.600152   70686 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:44:42.600208   70686 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.600258   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629803   70686 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:44:42.629847   70686 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.629897   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629956   70686 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:44:42.629990   70686 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.630029   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656649   70686 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:44:42.656693   70686 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.656693   70686 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:44:42.656723   70686 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.656736   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656763   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.669267   70686 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:44:42.669313   70686 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.669350   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677774   70686 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:44:42.677823   70686 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:44:42.677876   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.677890   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677969   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.677987   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.678027   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.678039   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.678069   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.787131   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.787197   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.787314   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.813675   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.816360   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.816416   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.816437   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.930195   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.930298   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.930333   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.930346   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.971335   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.971389   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.971398   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:43.068772   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:44:43.068871   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:43.068882   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:44:43.068892   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:44:43.097755   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:44:43.097781   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:44:43.099343   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:44:43.116136   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:44:43.303986   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:43.439716   70686 cache_images.go:92] duration metric: took 1.095530522s to LoadCachedImages
	W0127 11:44:43.439813   70686 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0127 11:44:43.439832   70686 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 11:44:43.439974   70686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570778 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:44:43.440069   70686 ssh_runner.go:195] Run: crio config
	I0127 11:44:43.491732   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:43.491754   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:43.491765   70686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:44:43.491782   70686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570778 NodeName:old-k8s-version-570778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:44:43.491897   70686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570778"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:44:43.491951   70686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:44:43.501539   70686 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:44:43.501593   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:44:43.510444   70686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 11:44:43.526994   70686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:44:43.542977   70686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
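	The kubeadm.yaml shipped above is rendered from the kubeadm options struct logged earlier. A toy sketch of rendering the InitConfiguration head of that file with text/template (the struct and field names here are made up; the values mirror the log):

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		_ = t.Execute(os.Stdout, struct {
			NodeIP, CRISocket, NodeName string
			Port                        int
		}{"192.168.50.193", "/var/run/crio/crio.sock", "old-k8s-version-570778", 8443})
	}
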
	I0127 11:44:43.559986   70686 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 11:44:43.564089   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:44:43.576120   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:43.702431   70686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:44:43.719740   70686 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778 for IP: 192.168.50.193
	I0127 11:44:43.719759   70686 certs.go:194] generating shared ca certs ...
	I0127 11:44:43.719773   70686 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:43.719941   70686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:44:43.720011   70686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:44:43.720024   70686 certs.go:256] generating profile certs ...
	I0127 11:44:43.810274   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key
	I0127 11:44:43.810422   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f
	I0127 11:44:43.810480   70686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key
	I0127 11:44:43.810641   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:44:43.810684   70686 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:44:43.810697   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:44:43.810727   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:44:43.810761   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:44:43.810789   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:44:43.810838   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:43.811665   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:44:43.856247   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:44:43.898135   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:44:43.938193   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:44:43.960927   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:44:43.984028   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:44:44.008415   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:44:44.030915   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:44:44.055340   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:44:44.077556   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:44:44.101525   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:44:44.124400   70686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:44:44.140292   70686 ssh_runner.go:195] Run: openssl version
	I0127 11:44:44.145827   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:44:44.155834   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.159949   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.160022   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.165584   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:44:44.178174   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:44:44.189759   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.194947   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.195006   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.200696   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:44:44.211199   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:44:44.221194   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225257   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225297   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.230582   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
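	The test -L / ln -fs steps above implement OpenSSL's hashed-lookup convention: TLS code finds a CA by a symlink named after its subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of computing that hash and creating the link (hashLink is illustrative and assumes openssl is on PATH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func hashLink(pem string) error {
		// `openssl x509 -hash -noout` prints the subject-hash filename stem.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		return os.Symlink(pem, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
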
	I0127 11:44:44.240578   70686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:44:44.245082   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:44:44.252016   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:44:44.257760   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:44:44.264902   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:44:44.270934   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:44:44.276642   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 11:44:44.282062   70686 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:44.282152   70686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:44:44.282190   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.318594   70686 cri.go:89] found id: ""
	I0127 11:44:44.318650   70686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:44:44.328642   70686 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:44:44.328665   70686 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:44:44.328716   70686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:44:44.337760   70686 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:44:44.338436   70686 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:44.338787   70686 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570778" cluster setting kubeconfig missing "old-k8s-version-570778" context setting]
	I0127 11:44:44.339275   70686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:44.379353   70686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:44:44.389831   70686 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0127 11:44:44.389864   70686 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:44:44.389876   70686 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:44:44.389917   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.429276   70686 cri.go:89] found id: ""
	I0127 11:44:44.429352   70686 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:44:44.446502   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:44:44.456332   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:44:44.456358   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:44:44.456406   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:44:44.465009   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:44:44.465064   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:44:44.474468   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:44:44.483271   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:44:44.483333   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:44:44.493091   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.501826   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:44:44.501887   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.511619   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:44:44.520146   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:44:44.520215   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
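	The grep/rm sweep above keeps a kubeconfig only if it already references https://control-plane.minikube.internal:8443; anything else is removed so the kubeadm init phases below regenerate it. A compact sketch of that rule (pruneStale is an assumed helper):

	package main

	import (
		"os"
		"strings"
	)

	func pruneStale(path, endpoint string) {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(path) // missing or pointing elsewhere: regenerate later
		}
	}

	func main() {
		for _, f := range []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf"} {
			pruneStale(f, "https://control-plane.minikube.internal:8443")
		}
	}
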
	I0127 11:44:44.529284   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:44:44.538474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:44.669112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.430626   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.649318   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.747035   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.834253   70686 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:44:45.834345   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:41.682339   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.682496   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.911112   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.080526   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:44.265972   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.765113   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.334836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.834834   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.334682   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.834945   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.335112   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.834442   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.335101   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.835321   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.334868   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.835371   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
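From here on, PID 70686 polls for the apiserver process roughly every 500ms. The equivalent shell loop, as a sketch (api_server.go uses a poll with a deadline; the until/sleep form here is illustrative):

	# wait until a kube-apiserver whose command line mentions this profile is running
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5    # the real wait gives up after a deadline and dumps diagnostics
	done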
	I0127 11:44:46.181944   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.681423   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.580901   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.079391   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:49.265367   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.765180   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.335142   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.835388   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.334604   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.835044   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.334680   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.834411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.335010   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.834554   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.181432   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.681540   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.081988   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:55.580478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:54.265141   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.265203   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.265900   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.335128   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.335140   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.835042   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.334817   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.834443   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.334777   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.835437   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.334852   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.834590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.182005   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.681494   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.079513   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.079905   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:02.080706   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.765897   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.265622   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:01.335351   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.835115   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.334828   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.834481   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.334592   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.834653   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.335201   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.834728   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.334872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.835121   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.181668   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.182704   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.681195   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:04.579620   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.079240   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.765054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.765605   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:06.335002   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:06.835393   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.334717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.835225   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.335465   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.835195   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.335007   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.835362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.334590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.835441   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.180735   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.181326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.079806   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.081218   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.264844   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:12.765530   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.334541   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:11.835283   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.335343   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.834836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.335067   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.834637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.334394   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.834608   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.835178   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.181440   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.182012   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:13.579850   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.580199   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.265832   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:17.765291   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.334479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.835000   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.335139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.835227   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.335309   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.835170   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.334384   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.835348   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.334845   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.835383   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.681535   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.181289   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.080468   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:20.579930   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.580421   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.765695   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.264793   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.335090   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.834734   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.335362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.834567   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.335485   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.835040   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.334533   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.834544   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.334975   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.834941   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.682460   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.181465   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:25.080118   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:27.579811   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.265167   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.265742   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.334897   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.834607   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.334771   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.335354   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.834876   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.335076   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.334594   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.834603   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.181841   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.680961   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:30.079284   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.079751   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.765734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.266015   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.335153   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.834967   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.335109   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.834477   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.335107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.835110   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.334563   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.835358   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.334401   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.835107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.185937   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.680940   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:35.681777   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:34.580737   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.080749   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.765617   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.265646   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:38.266295   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.335163   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:36.835139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.334510   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.834447   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.334776   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.834844   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.334806   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.835253   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.334905   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.834948   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.682410   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.182049   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.579328   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.580544   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.765177   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:43.265601   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.334866   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:41.834518   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.335359   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.834415   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.335098   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.834540   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.335306   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.834575   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.335244   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.835032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:45.835116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:45.868609   70686 cri.go:89] found id: ""
	I0127 11:45:45.868640   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.868652   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:45.868659   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:45.868718   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:45.907767   70686 cri.go:89] found id: ""
	I0127 11:45:45.907796   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.907805   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:45.907812   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:45.907870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:42.182202   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.680856   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.079255   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:46.079779   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.765111   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:47.765359   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.940736   70686 cri.go:89] found id: ""
	I0127 11:45:45.940781   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.940791   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:45.940800   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:45.940945   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:45.972511   70686 cri.go:89] found id: ""
	I0127 11:45:45.972536   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.972544   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:45.972550   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:45.972621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:46.004929   70686 cri.go:89] found id: ""
	I0127 11:45:46.004958   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.004966   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:46.004971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:46.005020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:46.037172   70686 cri.go:89] found id: ""
	I0127 11:45:46.037205   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.037217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:46.037224   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:46.037284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:46.070282   70686 cri.go:89] found id: ""
	I0127 11:45:46.070311   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.070322   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:46.070330   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:46.070387   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:46.106109   70686 cri.go:89] found id: ""
	I0127 11:45:46.106139   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.106150   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
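Once the wait expires, logs.go sweeps the CRI for every expected control-plane container by name and finds none. The sweep condenses to a single loop (the for-loop form is mine; the crictl invocation is verbatim from the log):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"    # prints nothing here: no containers exist
	done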
	I0127 11:45:46.106163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:46.106176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:46.147686   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:46.147719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:46.199085   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:46.199119   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:46.212487   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:46.212515   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:46.331675   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:46.331698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:46.331710   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
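This completes one full diagnostics pass, which the runner repeats between polls below. The five gather commands, collected in one place for reference (all verbatim from the log; describe nodes fails because nothing is listening on localhost:8443):

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # container status
	sudo journalctl -u kubelet -n 400                                # kubelet
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400                                   # CRI-O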
	I0127 11:45:48.902413   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:48.915872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:48.915933   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:48.950168   70686 cri.go:89] found id: ""
	I0127 11:45:48.950215   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.950223   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:48.950229   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:48.950280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:48.981915   70686 cri.go:89] found id: ""
	I0127 11:45:48.981947   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.981958   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:48.981966   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:48.982030   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:49.022418   70686 cri.go:89] found id: ""
	I0127 11:45:49.022448   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.022461   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:49.022468   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:49.022531   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:49.066138   70686 cri.go:89] found id: ""
	I0127 11:45:49.066164   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.066174   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:49.066181   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:49.066240   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:49.107856   70686 cri.go:89] found id: ""
	I0127 11:45:49.107887   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.107895   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:49.107901   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:49.107951   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:49.158460   70686 cri.go:89] found id: ""
	I0127 11:45:49.158492   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.158519   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:49.158545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:49.158608   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:49.194805   70686 cri.go:89] found id: ""
	I0127 11:45:49.194831   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.194839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:49.194844   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:49.194889   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:49.227445   70686 cri.go:89] found id: ""
	I0127 11:45:49.227475   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.227483   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:49.227491   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:49.227502   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:49.280386   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:49.280418   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:49.293755   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:49.293785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:49.366338   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:49.366366   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:49.366381   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:49.444064   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:49.444102   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:47.182717   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:49.681160   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.080162   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.579311   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.580182   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.266104   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.266221   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:51.990077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:52.002185   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:52.002244   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:52.033585   70686 cri.go:89] found id: ""
	I0127 11:45:52.033608   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.033616   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:52.033622   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:52.033671   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:52.063740   70686 cri.go:89] found id: ""
	I0127 11:45:52.063766   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.063776   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:52.063784   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:52.063846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:52.098052   70686 cri.go:89] found id: ""
	I0127 11:45:52.098089   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.098115   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:52.098122   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:52.098186   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:52.130011   70686 cri.go:89] found id: ""
	I0127 11:45:52.130039   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.130048   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:52.130057   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:52.130101   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:52.163864   70686 cri.go:89] found id: ""
	I0127 11:45:52.163887   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.163894   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:52.163899   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:52.163946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:52.195990   70686 cri.go:89] found id: ""
	I0127 11:45:52.196020   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.196029   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:52.196034   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:52.196079   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:52.227747   70686 cri.go:89] found id: ""
	I0127 11:45:52.227780   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.227792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:52.227799   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:52.227860   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:52.262186   70686 cri.go:89] found id: ""
	I0127 11:45:52.262214   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.262224   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:52.262234   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:52.262249   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:52.318567   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:52.318603   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:52.332621   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:52.332646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:52.403429   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:52.403451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:52.403462   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:52.482267   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:52.482309   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.018478   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:55.032583   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:55.032655   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:55.070418   70686 cri.go:89] found id: ""
	I0127 11:45:55.070446   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.070454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:55.070460   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:55.070534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:55.102785   70686 cri.go:89] found id: ""
	I0127 11:45:55.102820   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.102831   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:55.102837   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:55.102893   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:55.140432   70686 cri.go:89] found id: ""
	I0127 11:45:55.140466   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.140477   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:55.140483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:55.140548   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:55.173071   70686 cri.go:89] found id: ""
	I0127 11:45:55.173097   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.173107   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:55.173115   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:55.173175   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:55.207834   70686 cri.go:89] found id: ""
	I0127 11:45:55.207867   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.207878   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:55.207886   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:55.207949   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:55.240758   70686 cri.go:89] found id: ""
	I0127 11:45:55.240786   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.240794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:55.240807   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:55.240852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:55.276038   70686 cri.go:89] found id: ""
	I0127 11:45:55.276067   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.276078   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:55.276085   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:55.276135   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:55.307786   70686 cri.go:89] found id: ""
	I0127 11:45:55.307818   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.307829   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:55.307841   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:55.307855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:55.384874   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:55.384908   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.425141   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:55.425169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:55.479108   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:55.479144   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:55.492988   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:55.493018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:55.557856   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:51.681649   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:53.681709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.580408   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.079629   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.765284   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:56.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.059727   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:58.072633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:58.072713   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:58.107460   70686 cri.go:89] found id: ""
	I0127 11:45:58.107494   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.107505   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:58.107513   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:58.107570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:58.143678   70686 cri.go:89] found id: ""
	I0127 11:45:58.143709   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.143721   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:58.143729   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:58.143794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:58.177914   70686 cri.go:89] found id: ""
	I0127 11:45:58.177942   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.177949   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:58.177957   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:58.178003   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:58.210641   70686 cri.go:89] found id: ""
	I0127 11:45:58.210679   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.210690   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:58.210698   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:58.210759   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:58.242373   70686 cri.go:89] found id: ""
	I0127 11:45:58.242408   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.242420   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:58.242427   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:58.242494   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:58.277921   70686 cri.go:89] found id: ""
	I0127 11:45:58.277954   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.277965   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:58.277973   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:58.278033   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:58.310342   70686 cri.go:89] found id: ""
	I0127 11:45:58.310373   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.310384   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:58.310391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:58.310459   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:58.345616   70686 cri.go:89] found id: ""
	I0127 11:45:58.345649   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.345660   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:58.345671   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:58.345687   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:58.380655   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:58.380680   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:58.433828   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:58.433859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:58.447666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:58.447703   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:58.510668   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:58.510698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:58.510714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:56.181754   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.682655   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.080820   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.580837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.266054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.766023   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.087242   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:01.099871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:01.099926   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:01.132252   70686 cri.go:89] found id: ""
	I0127 11:46:01.132285   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.132293   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:01.132298   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:01.132348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:01.163920   70686 cri.go:89] found id: ""
	I0127 11:46:01.163949   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.163960   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:01.163967   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:01.164034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:01.198833   70686 cri.go:89] found id: ""
	I0127 11:46:01.198858   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.198865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:01.198871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:01.198916   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:01.238722   70686 cri.go:89] found id: ""
	I0127 11:46:01.238753   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.238763   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:01.238779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:01.238844   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:01.272868   70686 cri.go:89] found id: ""
	I0127 11:46:01.272892   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.272898   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:01.272903   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:01.272947   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:01.307986   70686 cri.go:89] found id: ""
	I0127 11:46:01.308015   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.308024   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:01.308029   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:01.308082   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:01.341997   70686 cri.go:89] found id: ""
	I0127 11:46:01.342027   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.342039   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:01.342047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:01.342109   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:01.374940   70686 cri.go:89] found id: ""
	I0127 11:46:01.374968   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.374978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:01.374989   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:01.375002   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:01.428465   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:01.428500   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:01.442684   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:01.442708   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:01.512159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:01.512185   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:01.512198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:01.586215   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:01.586265   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.127745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:04.140798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:04.140873   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:04.175150   70686 cri.go:89] found id: ""
	I0127 11:46:04.175186   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.175197   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:04.175204   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:04.175282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:04.210697   70686 cri.go:89] found id: ""
	I0127 11:46:04.210727   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.210736   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:04.210744   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:04.210800   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:04.240777   70686 cri.go:89] found id: ""
	I0127 11:46:04.240803   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.240811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:04.240821   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:04.240865   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:04.273040   70686 cri.go:89] found id: ""
	I0127 11:46:04.273076   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.273087   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:04.273094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:04.273151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:04.308441   70686 cri.go:89] found id: ""
	I0127 11:46:04.308468   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.308478   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:04.308484   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:04.308546   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:04.346756   70686 cri.go:89] found id: ""
	I0127 11:46:04.346783   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.346793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:04.346802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:04.346870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:04.381718   70686 cri.go:89] found id: ""
	I0127 11:46:04.381747   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.381758   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:04.381766   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:04.381842   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:04.415875   70686 cri.go:89] found id: ""
	I0127 11:46:04.415913   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.415921   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:04.415930   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:04.415942   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:04.499951   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:04.499990   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.539557   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:04.539592   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:04.595977   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:04.596011   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:04.609081   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:04.609107   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:04.678937   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:01.181382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.681326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:05.682184   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.581478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.079382   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:04.266171   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.765288   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:07.179-11:46:07.701 pid 70686: repeat of the 11:46:04 diagnostic cycle; crictl finds 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard; kubelet, dmesg, CRI-O, and container-status logs gathered; "kubectl describe nodes" again fails with "The connection to the server localhost:8443 was refused" ...]
	[... 11:46:10.239-11:46:10.731 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:08.181149   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.681951   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.079678   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.079837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:12.580869   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:11.265066   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.265843   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:13.267-11:46:13.799 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:13.181930   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.681382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.080714   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:17.580030   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.766366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.265607   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:16.299-11:46:16.826 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	[... 11:46:19.327-11:46:19.808 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:18.181077   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.181255   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:19.580896   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.079867   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.765484   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:22.345-11:46:22.837 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	[... 11:46:25.374-11:46:25.889 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:22.682762   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.180989   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:24.580025   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.079771   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.265011   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.265712   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:28.390-11:46:28.905 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:27.181377   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.682869   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.580416   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.080512   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.765386   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:31.766041   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	[... 11:46:31.441-11:46:31.926 pid 70686: identical diagnostic cycle; 0 containers found for all eight components; "kubectl describe nodes" fails: connection to localhost:8443 refused ...]
	I0127 11:46:34.465125   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:34.479852   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:34.479930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:34.511060   70686 cri.go:89] found id: ""
	I0127 11:46:34.511084   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.511093   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:34.511098   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:34.511143   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:34.544234   70686 cri.go:89] found id: ""
	I0127 11:46:34.544263   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.544269   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:34.544275   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:34.544319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:34.578776   70686 cri.go:89] found id: ""
	I0127 11:46:34.578799   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.578809   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:34.578816   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:34.578871   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:34.611130   70686 cri.go:89] found id: ""
	I0127 11:46:34.611154   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.611163   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:34.611168   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:34.611225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:34.643126   70686 cri.go:89] found id: ""
	I0127 11:46:34.643153   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.643163   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:34.643171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:34.643227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:34.678033   70686 cri.go:89] found id: ""
	I0127 11:46:34.678076   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.678087   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:34.678094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:34.678160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:34.712414   70686 cri.go:89] found id: ""
	I0127 11:46:34.712443   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.712454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:34.712461   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:34.712534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:34.745083   70686 cri.go:89] found id: ""
	I0127 11:46:34.745109   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.745116   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:34.745124   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:34.745136   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:34.757666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:34.757694   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:34.823196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:34.823218   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:34.823230   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:34.905878   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:34.905913   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.942463   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:34.942488   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:32.181312   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.181612   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.579348   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.579626   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:33.766304   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.265533   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:37.493333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:37.505875   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:37.505935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:37.538445   70686 cri.go:89] found id: ""
	I0127 11:46:37.538470   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.538478   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:37.538484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:37.538537   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:37.569576   70686 cri.go:89] found id: ""
	I0127 11:46:37.569607   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.569618   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:37.569625   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:37.569687   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:37.603340   70686 cri.go:89] found id: ""
	I0127 11:46:37.603366   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.603376   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:37.603383   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:37.603441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:37.637178   70686 cri.go:89] found id: ""
	I0127 11:46:37.637211   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.637221   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:37.637230   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:37.637294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:37.669332   70686 cri.go:89] found id: ""
	I0127 11:46:37.669359   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.669367   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:37.669373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:37.669420   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:37.701983   70686 cri.go:89] found id: ""
	I0127 11:46:37.702012   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.702021   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:37.702028   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:37.702089   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:37.734833   70686 cri.go:89] found id: ""
	I0127 11:46:37.734856   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.734865   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:37.734871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:37.734927   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:37.768113   70686 cri.go:89] found id: ""
	I0127 11:46:37.768141   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.768149   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:37.768157   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:37.768167   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:37.839883   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:37.839917   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:37.876177   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:37.876210   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:37.928640   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:37.928669   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:37.942971   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:37.942995   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:38.012611   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
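
	[Every describe-nodes attempt in these cycles fails the same way: the v1.20.0 kubectl binary cannot reach the API server because `localhost:8443` refuses the connection, which is consistent with the empty `kube-apiserver` probe results above. A quick connectivity check of the kind that would confirm this, written as a hypothetical Go snippet to be run on the node itself (the port and timeout mirror the log, the rest is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the kubectl error: nothing is listening on the apiserver port.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
	]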
	I0127 11:46:40.514324   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:40.526994   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:40.527053   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:40.561170   70686 cri.go:89] found id: ""
	I0127 11:46:40.561192   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.561200   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:40.561205   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:40.561248   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:40.597933   70686 cri.go:89] found id: ""
	I0127 11:46:40.597964   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.597973   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:40.597981   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:40.598049   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:40.633227   70686 cri.go:89] found id: ""
	I0127 11:46:40.633255   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.633263   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:40.633287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:40.633348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:40.667332   70686 cri.go:89] found id: ""
	I0127 11:46:40.667360   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.667368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:40.667373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:40.667434   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:40.702346   70686 cri.go:89] found id: ""
	I0127 11:46:40.702372   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.702383   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:40.702391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:40.702447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:40.733890   70686 cri.go:89] found id: ""
	I0127 11:46:40.733916   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.733924   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:40.733929   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:40.733979   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:40.766986   70686 cri.go:89] found id: ""
	I0127 11:46:40.767005   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.767011   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:40.767016   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:40.767069   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:40.809290   70686 cri.go:89] found id: ""
	I0127 11:46:40.809320   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.809331   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:40.809342   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:40.809363   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:40.863970   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:40.864006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:40.886163   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:40.886188   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:46:36.181772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.181835   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.682630   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:39.080089   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.080522   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.766734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.264746   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	W0127 11:46:40.951248   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.951277   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:40.951293   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:41.025220   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:41.025251   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.562970   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:43.575475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:43.575540   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:43.614847   70686 cri.go:89] found id: ""
	I0127 11:46:43.614875   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.614885   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:43.614892   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:43.614957   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:43.651178   70686 cri.go:89] found id: ""
	I0127 11:46:43.651208   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.651219   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:43.651227   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:43.651282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:43.683752   70686 cri.go:89] found id: ""
	I0127 11:46:43.683777   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.683783   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:43.683788   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:43.683846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:43.718384   70686 cri.go:89] found id: ""
	I0127 11:46:43.718418   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.718429   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:43.718486   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:43.718557   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:43.751566   70686 cri.go:89] found id: ""
	I0127 11:46:43.751619   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.751631   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:43.751639   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:43.751701   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:43.785338   70686 cri.go:89] found id: ""
	I0127 11:46:43.785370   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.785381   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:43.785390   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:43.785453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:43.825291   70686 cri.go:89] found id: ""
	I0127 11:46:43.825320   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.825330   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:43.825337   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:43.825397   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:43.856396   70686 cri.go:89] found id: ""
	I0127 11:46:43.856422   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.856429   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:43.856437   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:43.856448   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:43.907954   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:43.907991   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:43.920963   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:43.920987   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:43.986527   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:43.986547   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:43.986562   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:44.062764   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:44.062796   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.181118   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.185722   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.080947   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.579654   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.265779   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:46.259360   69396 pod_ready.go:82] duration metric: took 4m0.000152356s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:46.259407   69396 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:46.259422   69396 pod_ready.go:39] duration metric: took 4m14.538674469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:46.259449   69396 kubeadm.go:597] duration metric: took 4m21.955300548s to restartPrimaryControlPlane
	W0127 11:46:46.259525   69396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:46.259559   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
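
	[The lines above show the wait loop for process 69396 hitting its ceiling: after exactly 4m0s the metrics-server pod is still not Ready, the wait is abandoned (`will not retry!`), and minikube falls back from restarting the control plane to a full `kubeadm reset`. A condensed Go sketch of that wait-then-reset pattern, where isPodReady is a placeholder stub, the 2-second poll interval is an assumption, and the reset command is simplified from the logged `sudo env PATH=... kubeadm reset` invocation; minikube's pod_ready.go checks the Kubernetes API rather than this stub:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// isPodReady stands in for a real readiness check against the API server.
	func isPodReady() bool { return false }

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if isPodReady() {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		// Mirrors the fallback in the log: reset the cluster once the wait times out.
		fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
		cmd := exec.Command("sudo", "kubeadm", "reset",
			"--cri-socket", "/var/run/crio/crio.sock", "--force")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("kubeadm reset failed: %v\n%s", err, out)
		}
	}
	]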
	I0127 11:46:46.599548   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:46.625909   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:46.625985   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:46.670285   70686 cri.go:89] found id: ""
	I0127 11:46:46.670317   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.670329   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:46.670337   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:46.670408   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:46.703591   70686 cri.go:89] found id: ""
	I0127 11:46:46.703628   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.703636   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:46.703642   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:46.703689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:46.734451   70686 cri.go:89] found id: ""
	I0127 11:46:46.734475   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.734482   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:46.734487   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:46.734539   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:46.768854   70686 cri.go:89] found id: ""
	I0127 11:46:46.768879   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.768886   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:46.768891   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:46.768937   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:46.798912   70686 cri.go:89] found id: ""
	I0127 11:46:46.798937   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.798945   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:46.798951   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:46.799009   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:46.832665   70686 cri.go:89] found id: ""
	I0127 11:46:46.832689   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.832696   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:46.832702   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:46.832751   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:46.863964   70686 cri.go:89] found id: ""
	I0127 11:46:46.863990   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.863998   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:46.864003   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:46.864064   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:46.902558   70686 cri.go:89] found id: ""
	I0127 11:46:46.902595   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.902606   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:46.902617   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:46.902632   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:46.937731   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:46.937754   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:46.986804   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:46.986839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:47.000095   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:47.000142   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:47.064072   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:47.064099   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:47.064118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:49.640691   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:49.653166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:49.653225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:49.687904   70686 cri.go:89] found id: ""
	I0127 11:46:49.687928   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.687938   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:49.687945   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:49.688000   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:49.725500   70686 cri.go:89] found id: ""
	I0127 11:46:49.725528   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.725537   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:49.725549   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:49.725610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:49.757793   70686 cri.go:89] found id: ""
	I0127 11:46:49.757823   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.757834   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:49.757841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:49.757901   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:49.789916   70686 cri.go:89] found id: ""
	I0127 11:46:49.789945   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.789955   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:49.789962   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:49.790020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:49.821431   70686 cri.go:89] found id: ""
	I0127 11:46:49.821461   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.821472   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:49.821479   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:49.821541   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:49.853511   70686 cri.go:89] found id: ""
	I0127 11:46:49.853541   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.853548   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:49.853554   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:49.853605   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:49.887197   70686 cri.go:89] found id: ""
	I0127 11:46:49.887225   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.887232   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:49.887237   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:49.887313   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:49.920423   70686 cri.go:89] found id: ""
	I0127 11:46:49.920454   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.920465   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:49.920476   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:49.920489   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:49.970455   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:49.970487   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:49.985812   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:49.985844   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:50.055494   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:50.055520   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:50.055536   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:50.134706   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:50.134743   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:47.682388   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.180618   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:48.080040   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.580505   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.580590   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.675280   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:52.690464   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:52.690545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:52.722566   70686 cri.go:89] found id: ""
	I0127 11:46:52.722600   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.722611   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:52.722621   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:52.722683   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:52.754684   70686 cri.go:89] found id: ""
	I0127 11:46:52.754710   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.754718   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:52.754723   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:52.754782   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:52.786631   70686 cri.go:89] found id: ""
	I0127 11:46:52.786659   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.786685   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:52.786691   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:52.786745   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:52.817637   70686 cri.go:89] found id: ""
	I0127 11:46:52.817664   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.817672   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:52.817681   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:52.817737   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:52.853402   70686 cri.go:89] found id: ""
	I0127 11:46:52.853428   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.853437   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:52.853443   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:52.853504   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:52.893692   70686 cri.go:89] found id: ""
	I0127 11:46:52.893720   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.893727   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:52.893733   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:52.893780   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.924897   70686 cri.go:89] found id: ""
	I0127 11:46:52.924926   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.924934   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:52.924940   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:52.924988   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:52.955377   70686 cri.go:89] found id: ""
	I0127 11:46:52.955397   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.955404   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:52.955412   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:52.955422   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:53.007489   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:53.007518   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:53.020482   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:53.020508   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:53.088456   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:53.088489   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:53.088503   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:53.161401   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:53.161432   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:55.698676   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:55.711047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:55.711104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:55.741929   70686 cri.go:89] found id: ""
	I0127 11:46:55.741952   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.741960   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:55.741965   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:55.742016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:55.773353   70686 cri.go:89] found id: ""
	I0127 11:46:55.773385   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.773394   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:55.773399   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:55.773453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:55.805262   70686 cri.go:89] found id: ""
	I0127 11:46:55.805293   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.805303   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:55.805309   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:55.805356   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:55.837444   70686 cri.go:89] found id: ""
	I0127 11:46:55.837469   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.837477   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:55.837483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:55.837554   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:55.870483   70686 cri.go:89] found id: ""
	I0127 11:46:55.870519   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.870533   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:55.870541   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:55.870603   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:55.902327   70686 cri.go:89] found id: ""
	I0127 11:46:55.902364   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.902374   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:55.902381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:55.902448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.182237   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:54.680772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:55.079977   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.573914   69688 pod_ready.go:82] duration metric: took 4m0.000313005s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:56.573939   69688 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:56.573958   69688 pod_ready.go:39] duration metric: took 4m9.537234596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:56.573984   69688 kubeadm.go:597] duration metric: took 4m17.786447343s to restartPrimaryControlPlane
	W0127 11:46:56.574055   69688 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:56.574078   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:46:55.936231   70686 cri.go:89] found id: ""
	I0127 11:46:55.936269   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.936279   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:55.936287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:55.936369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:55.968008   70686 cri.go:89] found id: ""
	I0127 11:46:55.968032   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.968039   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:55.968047   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:55.968057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:56.018736   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:56.018766   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:56.031397   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:56.031423   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:56.097044   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:56.097066   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:56.097079   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:56.171821   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:56.171855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:58.715327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:58.728027   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:58.728087   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:58.758672   70686 cri.go:89] found id: ""
	I0127 11:46:58.758700   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.758712   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:58.758719   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:58.758786   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:58.790220   70686 cri.go:89] found id: ""
	I0127 11:46:58.790245   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.790255   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:58.790263   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:58.790327   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:58.822188   70686 cri.go:89] found id: ""
	I0127 11:46:58.822214   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.822221   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:58.822227   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:58.822273   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:58.863053   70686 cri.go:89] found id: ""
	I0127 11:46:58.863089   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.863096   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:58.863102   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:58.863156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:58.899216   70686 cri.go:89] found id: ""
	I0127 11:46:58.899259   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.899271   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:58.899279   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:58.899338   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:58.935392   70686 cri.go:89] found id: ""
	I0127 11:46:58.935425   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.935435   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:58.935441   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:58.935503   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:58.972729   70686 cri.go:89] found id: ""
	I0127 11:46:58.972759   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.972767   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:58.972772   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:58.972823   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:59.008660   70686 cri.go:89] found id: ""
	I0127 11:46:59.008689   70686 logs.go:282] 0 containers: []
	W0127 11:46:59.008698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:59.008707   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:59.008718   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:59.063158   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:59.063199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:59.075767   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:59.075799   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:59.142382   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:59.142406   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:59.142421   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:59.223068   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:59.223100   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:56.683260   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:59.183917   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:01.760319   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:01.774202   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:01.774282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:01.817355   70686 cri.go:89] found id: ""
	I0127 11:47:01.817389   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.817401   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:01.817408   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:01.817469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:01.862960   70686 cri.go:89] found id: ""
	I0127 11:47:01.862985   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.862996   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:01.863003   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:01.863065   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:01.899900   70686 cri.go:89] found id: ""
	I0127 11:47:01.899931   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.899942   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:01.899949   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:01.900014   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:01.934687   70686 cri.go:89] found id: ""
	I0127 11:47:01.934723   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.934735   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:01.934744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:01.934809   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:01.969463   70686 cri.go:89] found id: ""
	I0127 11:47:01.969490   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.969501   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:01.969507   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:01.969578   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:02.000732   70686 cri.go:89] found id: ""
	I0127 11:47:02.000762   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.000772   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:02.000779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:02.000837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:02.035717   70686 cri.go:89] found id: ""
	I0127 11:47:02.035740   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.035748   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:02.035755   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:02.035799   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:02.073457   70686 cri.go:89] found id: ""
	I0127 11:47:02.073488   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.073498   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:02.073506   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:02.073519   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:02.142775   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:02.142800   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:02.142819   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:02.224541   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:02.224579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:02.260807   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:02.260840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:02.315983   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:02.316017   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
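	[Note: each retry cycle above repeats the same per-component probe before re-gathering logs. A bash sketch of one probe pass, assuming crictl is installed and pointed at the default CRI socket:]
	  # One pass of the container probe this log repeats for every component.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "No container was found matching \"$name\""
	  done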
	I0127 11:47:04.830232   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:04.844321   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:04.844380   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:04.880946   70686 cri.go:89] found id: ""
	I0127 11:47:04.880977   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.880986   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:04.880991   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:04.881066   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:04.913741   70686 cri.go:89] found id: ""
	I0127 11:47:04.913766   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.913773   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:04.913778   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:04.913831   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:04.948526   70686 cri.go:89] found id: ""
	I0127 11:47:04.948558   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.948565   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:04.948571   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:04.948621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:04.982076   70686 cri.go:89] found id: ""
	I0127 11:47:04.982102   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.982112   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:04.982119   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:04.982181   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:05.014982   70686 cri.go:89] found id: ""
	I0127 11:47:05.015007   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.015018   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:05.015025   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:05.015111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:05.048025   70686 cri.go:89] found id: ""
	I0127 11:47:05.048054   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.048065   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:05.048073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:05.048132   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:05.078464   70686 cri.go:89] found id: ""
	I0127 11:47:05.078492   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.078502   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:05.078509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:05.078584   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:05.109525   70686 cri.go:89] found id: ""
	I0127 11:47:05.109560   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.109571   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:05.109581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:05.109595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:05.157576   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:05.157608   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:05.170049   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:05.170087   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:05.239411   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:05.239433   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:05.239447   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:05.318700   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:05.318742   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:01.682086   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:04.182095   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:07.856193   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:07.870239   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:07.870310   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:07.910104   70686 cri.go:89] found id: ""
	I0127 11:47:07.910130   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.910138   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:07.910144   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:07.910189   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:07.945048   70686 cri.go:89] found id: ""
	I0127 11:47:07.945074   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.945084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:07.945092   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:07.945166   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:07.976080   70686 cri.go:89] found id: ""
	I0127 11:47:07.976111   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.976122   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:07.976128   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:07.976200   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:08.013354   70686 cri.go:89] found id: ""
	I0127 11:47:08.013388   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.013400   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:08.013407   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:08.013465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:08.045589   70686 cri.go:89] found id: ""
	I0127 11:47:08.045618   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.045626   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:08.045631   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:08.045689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:08.079539   70686 cri.go:89] found id: ""
	I0127 11:47:08.079565   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.079573   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:08.079579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:08.079650   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:08.110343   70686 cri.go:89] found id: ""
	I0127 11:47:08.110375   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.110383   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:08.110388   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:08.110447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:08.140367   70686 cri.go:89] found id: ""
	I0127 11:47:08.140398   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.140411   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:08.140422   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:08.140436   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:08.205212   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:08.205240   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:08.205255   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:08.277925   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:08.277956   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:08.314583   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:08.314609   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:08.362779   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:08.362809   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:10.876637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:10.890367   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:10.890448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:10.925658   70686 cri.go:89] found id: ""
	I0127 11:47:10.925688   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.925699   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:10.925707   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:10.925763   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:06.681477   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:08.681667   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.916547   69396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.656958711s)
	I0127 11:47:13.916611   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:13.933947   69396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:13.945813   69396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:13.956760   69396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:13.956784   69396 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:13.956829   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:13.967874   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:13.967928   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:13.978307   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:13.988624   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:13.988681   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:14.000424   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.012062   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:14.012123   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.021263   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:14.031880   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:14.031940   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
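	[Note: the four grep-then-rm pairs above amount to one stale-kubeconfig sweep; an equivalent bash sketch, with the endpoint and paths taken from this log:]
	  # Keep a kubeconfig only if it already points at the expected endpoint;
	  # otherwise remove it so kubeadm init can rewrite it.
	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin kubelet controller-manager scheduler; do
	    conf="/etc/kubernetes/${f}.conf"
	    sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
	  done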
	I0127 11:47:14.043324   69396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:14.085914   69396 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:14.085997   69396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:14.183080   69396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:14.183249   69396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:14.183394   69396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:14.195440   69396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:14.197259   69396 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:14.197356   69396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:14.197854   69396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:14.198266   69396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:14.198428   69396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:14.198787   69396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:14.200947   69396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:14.201202   69396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:14.201438   69396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:14.201742   69396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:14.201820   69396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:14.201962   69396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:14.202056   69396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:14.393335   69396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:14.578877   69396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:14.683103   69396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:14.892112   69396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:15.059210   69396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:15.059802   69396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:15.062493   69396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
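	[Note: every certificate above is reused from disk rather than regenerated. Their remaining validity can be inspected with kubeadm itself; a hedged one-liner assuming the binary path and cert dir this run uses:]
	  # check-expiration prints expiry dates for each cert kubeadm manages.
	  sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
	    kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs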
	I0127 11:47:10.957444   70686 cri.go:89] found id: ""
	I0127 11:47:10.957478   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.957490   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:10.957498   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:10.957561   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:10.988373   70686 cri.go:89] found id: ""
	I0127 11:47:10.988401   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.988412   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:10.988419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:10.988483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:11.019641   70686 cri.go:89] found id: ""
	I0127 11:47:11.019672   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.019683   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:11.019690   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:11.019747   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:11.051614   70686 cri.go:89] found id: ""
	I0127 11:47:11.051643   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.051654   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:11.051661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:11.051709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:11.083356   70686 cri.go:89] found id: ""
	I0127 11:47:11.083386   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.083396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:11.083404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:11.083464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:11.115324   70686 cri.go:89] found id: ""
	I0127 11:47:11.115359   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.115370   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:11.115378   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:11.115451   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:11.150953   70686 cri.go:89] found id: ""
	I0127 11:47:11.150983   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.150994   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:11.151005   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:11.151018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:11.199824   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:11.199855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:11.212841   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:11.212906   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:11.278680   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:11.278707   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:11.278726   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:11.356679   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:11.356719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:13.900662   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:13.913787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:13.913849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:13.947893   70686 cri.go:89] found id: ""
	I0127 11:47:13.947922   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.947934   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:13.947943   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:13.948001   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:13.983161   70686 cri.go:89] found id: ""
	I0127 11:47:13.983190   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.983201   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:13.983209   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:13.983264   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:14.022256   70686 cri.go:89] found id: ""
	I0127 11:47:14.022284   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.022295   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:14.022303   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:14.022354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:14.056796   70686 cri.go:89] found id: ""
	I0127 11:47:14.056830   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.056841   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:14.056848   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:14.056907   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:14.094914   70686 cri.go:89] found id: ""
	I0127 11:47:14.094941   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.094948   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:14.094954   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:14.095011   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:14.133436   70686 cri.go:89] found id: ""
	I0127 11:47:14.133463   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.133471   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:14.133477   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:14.133542   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:14.169031   70686 cri.go:89] found id: ""
	I0127 11:47:14.169062   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.169072   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:14.169078   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:14.169125   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:14.212411   70686 cri.go:89] found id: ""
	I0127 11:47:14.212435   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.212443   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:14.212452   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:14.212463   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:14.262867   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:14.262898   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:14.275105   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:14.275131   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:14.341159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:14.341190   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:14.341208   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:14.415317   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:14.415367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:11.180827   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.681189   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.682069   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.064304   69396 out.go:235]   - Booting up control plane ...
	I0127 11:47:15.064419   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:15.064539   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:15.064632   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:15.081619   69396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:15.087804   69396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:15.087864   69396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:15.215883   69396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:15.216024   69396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:15.717623   69396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.507256ms
	I0127 11:47:15.717711   69396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:20.718798   69396 kubeadm.go:310] [api-check] The API server is healthy after 5.001299318s
	I0127 11:47:20.735824   69396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:20.751647   69396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:20.776203   69396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:20.776453   69396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-273200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:20.786999   69396 kubeadm.go:310] [bootstrap-token] Using token: tjwk8y.hsba31n3brg7yicx
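	[Note: the kubelet-check and api-check lines a few lines above poll fixed local endpoints. Hedged manual equivalents, with ports taken from this log and -k used because the apiserver certificate is self-signed:]
	  # /healthz on both endpoints is what kubeadm's own checks poll.
	  curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
	  curl -skf https://127.0.0.1:8443/healthz && echo "apiserver healthy"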
	I0127 11:47:16.953543   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:16.966233   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:16.966320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:17.006909   70686 cri.go:89] found id: ""
	I0127 11:47:17.006936   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.006946   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:17.006953   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:17.007008   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:17.041632   70686 cri.go:89] found id: ""
	I0127 11:47:17.041659   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.041669   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:17.041677   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:17.041731   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:17.076772   70686 cri.go:89] found id: ""
	I0127 11:47:17.076801   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.076811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:17.076818   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:17.076870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:17.112391   70686 cri.go:89] found id: ""
	I0127 11:47:17.112422   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.112433   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:17.112440   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:17.112573   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:17.148197   70686 cri.go:89] found id: ""
	I0127 11:47:17.148229   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.148247   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:17.148255   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:17.148320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:17.186840   70686 cri.go:89] found id: ""
	I0127 11:47:17.186871   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.186882   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:17.186895   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:17.186953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:17.219412   70686 cri.go:89] found id: ""
	I0127 11:47:17.219443   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.219454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:17.219463   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:17.219534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:17.256447   70686 cri.go:89] found id: ""
	I0127 11:47:17.256478   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.256488   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:17.256499   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:17.256512   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.293919   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:17.293955   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:17.342997   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:17.343028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:17.356650   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:17.356679   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:17.425809   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:17.425838   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:17.425852   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.017327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:20.034172   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:20.034239   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:20.071873   70686 cri.go:89] found id: ""
	I0127 11:47:20.071895   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.071903   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:20.071908   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:20.071955   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:20.106387   70686 cri.go:89] found id: ""
	I0127 11:47:20.106410   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.106417   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:20.106422   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:20.106481   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:20.141095   70686 cri.go:89] found id: ""
	I0127 11:47:20.141130   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.141138   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:20.141144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:20.141194   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:20.183275   70686 cri.go:89] found id: ""
	I0127 11:47:20.183302   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.183310   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:20.183316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:20.183373   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:20.217954   70686 cri.go:89] found id: ""
	I0127 11:47:20.217981   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.217991   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:20.217999   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:20.218061   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:20.262572   70686 cri.go:89] found id: ""
	I0127 11:47:20.262604   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.262616   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:20.262623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:20.262677   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:20.297951   70686 cri.go:89] found id: ""
	I0127 11:47:20.297982   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.297993   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:20.298000   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:20.298088   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:20.331854   70686 cri.go:89] found id: ""
	I0127 11:47:20.331891   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.331901   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:20.331913   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:20.331930   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:20.387238   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:20.387274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:20.409789   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:20.409823   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:20.487425   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:20.487451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:20.487464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.563923   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:20.563959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.682390   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.182895   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.788426   69396 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:20.788582   69396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:20.793089   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:20.803401   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:20.812287   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:20.816685   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:20.822172   69396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:21.128937   69396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:21.553347   69396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:22.127179   69396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:22.127210   69396 kubeadm.go:310] 
	I0127 11:47:22.127314   69396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:22.127342   69396 kubeadm.go:310] 
	I0127 11:47:22.127419   69396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:22.127428   69396 kubeadm.go:310] 
	I0127 11:47:22.127467   69396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:22.127532   69396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:22.127584   69396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:22.127594   69396 kubeadm.go:310] 
	I0127 11:47:22.127682   69396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:22.127691   69396 kubeadm.go:310] 
	I0127 11:47:22.127757   69396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:22.127768   69396 kubeadm.go:310] 
	I0127 11:47:22.127848   69396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:22.127969   69396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:22.128089   69396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:22.128103   69396 kubeadm.go:310] 
	I0127 11:47:22.128204   69396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:22.128331   69396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:22.128350   69396 kubeadm.go:310] 
	I0127 11:47:22.128485   69396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.128622   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:22.128658   69396 kubeadm.go:310] 	--control-plane 
	I0127 11:47:22.128669   69396 kubeadm.go:310] 
	I0127 11:47:22.128793   69396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:22.128805   69396 kubeadm.go:310] 
	I0127 11:47:22.128921   69396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.129015   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:22.129734   69396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
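	[Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard openssl pipeline; the cert path is an assumption (this run keeps its certs under /var/lib/minikube/certs):]
	  # Recompute the discovery hash from the CA certificate.
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'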
	I0127 11:47:22.129770   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:47:22.129781   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:22.131454   69396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:22.132751   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:22.143934   69396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
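	[Note: the 496-byte conflist itself is not reproduced in the log. For illustration only, a bridge-CNI config of the same general shape; every value below is an assumption, not the file minikube actually wrote:]
	  # Illustrative bridge CNI config; subnet and plugin options are guesses.
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF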
	I0127 11:47:22.162031   69396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:22.162109   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.162131   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-273200 minikube.k8s.io/updated_at=2025_01_27T11_47_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-273200 minikube.k8s.io/primary=true
	I0127 11:47:22.357159   69396 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:22.357255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.858227   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.101745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:23.115010   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:23.115068   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:23.153195   70686 cri.go:89] found id: ""
	I0127 11:47:23.153223   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.153236   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:23.153244   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:23.153311   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:23.187393   70686 cri.go:89] found id: ""
	I0127 11:47:23.187420   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.187431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:23.187437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:23.187499   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:23.220850   70686 cri.go:89] found id: ""
	I0127 11:47:23.220879   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.220888   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:23.220896   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:23.220953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:23.256597   70686 cri.go:89] found id: ""
	I0127 11:47:23.256626   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.256636   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:23.256644   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:23.256692   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:23.296324   70686 cri.go:89] found id: ""
	I0127 11:47:23.296356   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.296366   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:23.296373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:23.296436   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:23.335645   70686 cri.go:89] found id: ""
	I0127 11:47:23.335672   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.335681   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:23.335687   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:23.335733   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:23.366972   70686 cri.go:89] found id: ""
	I0127 11:47:23.366995   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.367003   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:23.367008   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:23.367062   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:23.405377   70686 cri.go:89] found id: ""
	I0127 11:47:23.405404   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.405412   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:23.405420   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:23.405433   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:23.473871   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:23.473898   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:23.473918   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:23.548827   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:23.548868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:23.584272   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:23.584302   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:23.645470   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:23.645517   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:22.681079   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:24.681767   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
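	[Note: the pod_ready lines interleaved above come from a second minikube run (pid 70237) polling a metrics-server pod's Ready condition. A hedged kubectl equivalent; the label selector is an assumption based on the stock metrics-server manifests:]
	  # Prints "True" once the pod's Ready condition is satisfied.
	  kubectl -n kube-system get pod -l k8s-app=metrics-server \
	    -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'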
	I0127 11:47:23.357378   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.858261   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.358001   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.858052   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.358029   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.858255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.357827   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.545723   69396 kubeadm.go:1113] duration metric: took 4.38367816s to wait for elevateKubeSystemPrivileges
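	[Note: the repeated `kubectl get sa default` calls above are that wait: the default ServiceAccount only exists once kube-controller-manager has started, so minikube polls for it before granting kube-system privileges. A bash sketch of the same wait, with paths from this log:]
	  # Poll until the default ServiceAccount is visible.
	  until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done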
	I0127 11:47:26.545828   69396 kubeadm.go:394] duration metric: took 5m2.297374967s to StartCluster
	I0127 11:47:26.545882   69396 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.545994   69396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:26.548122   69396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.548782   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:26.548545   69396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:26.548897   69396 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:26.549176   69396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-273200"
	I0127 11:47:26.549197   69396 addons.go:238] Setting addon storage-provisioner=true in "no-preload-273200"
	W0127 11:47:26.549209   69396 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:47:26.549239   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.549690   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.549730   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.549955   69396 addons.go:69] Setting default-storageclass=true in profile "no-preload-273200"
	I0127 11:47:26.549974   69396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-273200"
	I0127 11:47:26.550340   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.550368   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.550531   69396 addons.go:69] Setting metrics-server=true in profile "no-preload-273200"
	I0127 11:47:26.550551   69396 addons.go:238] Setting addon metrics-server=true in "no-preload-273200"
	W0127 11:47:26.550559   69396 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:26.550590   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550587   69396 addons.go:69] Setting dashboard=true in profile "no-preload-273200"
	I0127 11:47:26.550619   69396 addons.go:238] Setting addon dashboard=true in "no-preload-273200"
	W0127 11:47:26.550629   69396 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:26.550671   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550795   69396 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:26.550980   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551018   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.551086   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551125   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.552072   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:26.591135   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0127 11:47:26.591160   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0127 11:47:26.591337   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0127 11:47:26.591436   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0127 11:47:26.591962   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.591974   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592254   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592532   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592551   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592661   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592682   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592699   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592683   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.593029   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593065   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593226   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.593239   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593679   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593720   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.593787   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593821   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.596147   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.600142   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.600157   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.602457   69396 addons.go:238] Setting addon default-storageclass=true in "no-preload-273200"
	W0127 11:47:26.602479   69396 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:26.602510   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.602874   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.602914   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.604120   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.608202   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.608245   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.617629   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0127 11:47:26.618396   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.618963   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.618984   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.619363   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.619536   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.621603   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.623294   69396 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:26.625658   69396 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:26.626912   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:26.626933   69396 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:26.626955   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.630583   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0127 11:47:26.630587   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 11:47:26.631073   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.631690   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.631710   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.631883   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.632167   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.632324   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.632658   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.632673   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.633439   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.633559   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.633993   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.634505   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.634533   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.634773   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.634922   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.635051   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.635188   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.636019   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.636059   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.642473   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 11:47:26.645166   69396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:26.646249   69396 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:26.646264   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:26.646281   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.651734   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.651803   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.651826   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.651843   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.652136   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.659702   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.659915   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.663957   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0127 11:47:26.664289   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665037   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665168   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665183   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665558   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.665749   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665761   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665970   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.666585   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.666886   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.667729   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669615   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669619   69396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:24.171505   69688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.597391159s)
	I0127 11:47:24.171597   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:24.187337   69688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:24.197062   69688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:24.208102   69688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:24.208127   69688 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:24.208176   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:24.223247   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:24.223306   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:24.232903   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:24.241163   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:24.241220   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:24.251669   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.260475   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:24.260534   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.269272   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:24.277509   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:24.277554   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:47:24.286253   69688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:24.435312   69688 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:47:26.669962   69396 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:26.669979   69396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:26.669998   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.670903   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:26.670919   69396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:26.670935   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.675429   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678600   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678659   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678709   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678726   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678749   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678771   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678781   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678803   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.678993   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.679036   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679128   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679182   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.679386   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.875833   69396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:26.920571   69396 node_ready.go:35] waiting up to 6m0s for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939903   69396 node_ready.go:49] node "no-preload-273200" has status "Ready":"True"
	I0127 11:47:26.939926   69396 node_ready.go:38] duration metric: took 19.319573ms for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939937   69396 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:26.959191   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:27.008467   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:27.081273   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:27.081304   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:27.101527   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:27.152011   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:27.152043   69396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:27.244718   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:27.244747   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:27.252472   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.252495   69396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:27.296605   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.313892   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:27.313920   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:27.403990   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:27.404022   69396 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:27.477781   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:27.477811   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:27.571056   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:27.571086   69396 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:27.705284   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:27.705316   69396 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:27.789319   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:27.789349   69396 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:27.870737   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:27.870774   69396 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:27.935415   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:27.935444   69396 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:27.990927   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:28.098209   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089707756s)
	I0127 11:47:28.098259   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098271   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098370   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098402   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098565   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098581   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098609   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098618   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098707   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098721   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098730   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098738   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098839   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.098925   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098945   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.099049   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.099059   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.099062   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.114073   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.114099   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.114382   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.114404   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.614645   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.317992457s)
	I0127 11:47:28.614719   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.614737   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.615709   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.615736   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.615759   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.615779   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.615792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.617426   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.617436   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.617454   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.617473   69396 addons.go:479] Verifying addon metrics-server=true in "no-preload-273200"
	I0127 11:47:28.972192   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.485321   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.494345914s)
	I0127 11:47:29.485395   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485413   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.485754   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.485774   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.485784   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.486141   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:29.486164   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.486172   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.487790   69396 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-273200 addons enable metrics-server
	
	I0127 11:47:29.489175   69396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:26.161139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:26.175269   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:26.175344   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:26.213990   70686 cri.go:89] found id: ""
	I0127 11:47:26.214019   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.214030   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:26.214038   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:26.214099   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:26.250643   70686 cri.go:89] found id: ""
	I0127 11:47:26.250672   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.250680   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:26.250685   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:26.250749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:26.289305   70686 cri.go:89] found id: ""
	I0127 11:47:26.289327   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.289336   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:26.289343   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:26.289400   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:26.327511   70686 cri.go:89] found id: ""
	I0127 11:47:26.327546   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.327557   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:26.327564   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:26.327629   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:26.363961   70686 cri.go:89] found id: ""
	I0127 11:47:26.363996   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.364011   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:26.364019   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:26.364076   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:26.403759   70686 cri.go:89] found id: ""
	I0127 11:47:26.403782   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.403793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:26.403801   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:26.403862   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:26.443391   70686 cri.go:89] found id: ""
	I0127 11:47:26.443419   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.443429   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:26.443436   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:26.443496   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:26.486086   70686 cri.go:89] found id: ""
	I0127 11:47:26.486189   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.486219   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:26.486255   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:26.486290   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:26.537761   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:26.537789   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:26.624695   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:26.624728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:26.644616   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:26.644646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:26.732815   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:26.732835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:26.732846   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:29.315744   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:29.331345   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:29.331421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:29.366233   70686 cri.go:89] found id: ""
	I0127 11:47:29.366264   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.366276   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:29.366283   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:29.366355   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:29.402282   70686 cri.go:89] found id: ""
	I0127 11:47:29.402310   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.402320   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:29.402327   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:29.402389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:29.438381   70686 cri.go:89] found id: ""
	I0127 11:47:29.438409   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.438420   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:29.438429   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:29.438483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:29.473386   70686 cri.go:89] found id: ""
	I0127 11:47:29.473408   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.473414   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:29.473419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:29.473465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:29.506930   70686 cri.go:89] found id: ""
	I0127 11:47:29.506954   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.506961   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:29.506966   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:29.507025   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:29.542763   70686 cri.go:89] found id: ""
	I0127 11:47:29.542786   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.542794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:29.542802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:29.542861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:29.578067   70686 cri.go:89] found id: ""
	I0127 11:47:29.578097   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.578108   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:29.578117   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:29.578176   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:29.613659   70686 cri.go:89] found id: ""
	I0127 11:47:29.613687   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.613698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:29.613709   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:29.613728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:29.659409   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:29.659446   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:29.718837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:29.718870   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:29.735558   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:29.735583   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:29.839999   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:29.840025   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:29.840043   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:26.683550   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.183056   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:32.285356   69688 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:32.285447   69688 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:32.285583   69688 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:32.285722   69688 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:32.285858   69688 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:32.285955   69688 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:32.287165   69688 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:32.287240   69688 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:32.287301   69688 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:32.287411   69688 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:32.287505   69688 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:32.287574   69688 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:32.287659   69688 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:32.287773   69688 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:32.287869   69688 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:32.287947   69688 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:32.288020   69688 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:32.288054   69688 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:32.288102   69688 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:32.288149   69688 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:32.288202   69688 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:32.288265   69688 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:32.288341   69688 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:32.288412   69688 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:32.288506   69688 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:32.288612   69688 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:32.290658   69688 out.go:235]   - Booting up control plane ...
	I0127 11:47:32.290754   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:32.290861   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:32.290938   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:32.291060   69688 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:32.291188   69688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:32.291240   69688 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:32.291426   69688 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:32.291585   69688 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:32.291703   69688 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.921879ms
	I0127 11:47:32.291805   69688 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:32.291896   69688 kubeadm.go:310] [api-check] The API server is healthy after 5.007975802s
	I0127 11:47:32.292039   69688 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:32.292235   69688 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:32.292322   69688 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:32.292582   69688 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-986409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:32.292672   69688 kubeadm.go:310] [bootstrap-token] Using token: qkdn31.mmb2k0rafw3oyd5r
	I0127 11:47:32.293870   69688 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:32.294001   69688 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:32.294069   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:32.294179   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:32.294287   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:32.294412   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:32.294512   69688 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:32.294620   69688 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:32.294658   69688 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:32.294697   69688 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:32.294704   69688 kubeadm.go:310] 
	I0127 11:47:32.294752   69688 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:32.294759   69688 kubeadm.go:310] 
	I0127 11:47:32.294824   69688 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:32.294834   69688 kubeadm.go:310] 
	I0127 11:47:32.294869   69688 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:32.294927   69688 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:32.294970   69688 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:32.294976   69688 kubeadm.go:310] 
	I0127 11:47:32.295034   69688 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:32.295040   69688 kubeadm.go:310] 
	I0127 11:47:32.295078   69688 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:32.295084   69688 kubeadm.go:310] 
	I0127 11:47:32.295129   69688 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:32.295218   69688 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:32.295321   69688 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:32.295333   69688 kubeadm.go:310] 
	I0127 11:47:32.295447   69688 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:32.295574   69688 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:32.295586   69688 kubeadm.go:310] 
	I0127 11:47:32.295723   69688 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.295861   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:32.295885   69688 kubeadm.go:310] 	--control-plane 
	I0127 11:47:32.295888   69688 kubeadm.go:310] 
	I0127 11:47:32.295957   69688 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:32.295963   69688 kubeadm.go:310] 
	I0127 11:47:32.296089   69688 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.296217   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:32.296242   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:47:32.296252   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:32.297821   69688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:32.299024   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:32.311774   69688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 11:47:32.333154   69688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:32.333250   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:32.333317   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-986409 minikube.k8s.io/updated_at=2025_01_27T11_47_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=embed-certs-986409 minikube.k8s.io/primary=true
	I0127 11:47:32.373901   69688 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:32.614706   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:29.490582   69396 addons.go:514] duration metric: took 2.941688444s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:31.467084   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.115242   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:33.614855   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.114947   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.615735   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.114787   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.615277   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.708075   69688 kubeadm.go:1113] duration metric: took 3.374895681s to wait for elevateKubeSystemPrivileges
	I0127 11:47:35.708110   69688 kubeadm.go:394] duration metric: took 4m56.964886498s to StartCluster
	I0127 11:47:35.708127   69688 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.708206   69688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:35.709765   69688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.710017   69688 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:35.710099   69688 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:35.710197   69688 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-986409"
	I0127 11:47:35.710208   69688 addons.go:69] Setting default-storageclass=true in profile "embed-certs-986409"
	I0127 11:47:35.710224   69688 addons.go:69] Setting dashboard=true in profile "embed-certs-986409"
	I0127 11:47:35.710231   69688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-986409"
	I0127 11:47:35.710234   69688 addons.go:238] Setting addon dashboard=true in "embed-certs-986409"
	I0127 11:47:35.710215   69688 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-986409"
	W0127 11:47:35.710294   69688 addons.go:247] addon storage-provisioner should already be in state true
	W0127 11:47:35.710246   69688 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:35.710361   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.710231   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:35.710232   69688 addons.go:69] Setting metrics-server=true in profile "embed-certs-986409"
	I0127 11:47:35.710835   69688 addons.go:238] Setting addon metrics-server=true in "embed-certs-986409"
	W0127 11:47:35.710848   69688 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:35.710878   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.711284   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711319   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711356   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711379   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711948   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.712418   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.712548   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.713403   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.713472   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.719688   69688 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:35.721496   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:35.730986   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0127 11:47:35.731485   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.731589   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0127 11:47:35.731973   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.731990   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732030   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732378   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.732610   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I0127 11:47:35.732868   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.732886   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732943   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732985   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733025   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733170   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.733387   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.733408   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.733574   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733609   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733744   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.734292   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.734315   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.739242   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0127 11:47:35.739695   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.740240   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.740254   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.740603   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.740797   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.744403   69688 addons.go:238] Setting addon default-storageclass=true in "embed-certs-986409"
	W0127 11:47:35.744426   69688 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:35.744451   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.744823   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.744854   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.756768   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0127 11:47:35.757189   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.757717   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.757742   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.758231   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.758430   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.760526   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.762154   69688 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:35.763484   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:35.763499   69688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:35.763517   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.766471   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.766836   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.766859   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.767027   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.767162   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.767269   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.767362   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.768736   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0127 11:47:35.769217   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.769830   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.769845   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.770259   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.770842   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.770876   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.773590   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0127 11:47:35.774146   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.774722   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.774738   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.774800   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0127 11:47:35.775433   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.775595   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.775820   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.776093   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.776103   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.776797   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.777045   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.777670   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.778791   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.779433   69688 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:35.780791   69688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:35.782335   69688 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:32.447780   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:32.465728   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:32.465812   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:32.527859   70686 cri.go:89] found id: ""
	I0127 11:47:32.527947   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.527972   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:32.527990   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:32.528104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:32.576073   70686 cri.go:89] found id: ""
	I0127 11:47:32.576171   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.576187   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:32.576195   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:32.576290   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:32.623076   70686 cri.go:89] found id: ""
	I0127 11:47:32.623118   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.623130   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:32.623137   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:32.623225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:32.691228   70686 cri.go:89] found id: ""
	I0127 11:47:32.691318   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.691343   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:32.691362   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:32.691477   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:32.745780   70686 cri.go:89] found id: ""
	I0127 11:47:32.745811   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.745823   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:32.745831   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:32.745906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:32.789692   70686 cri.go:89] found id: ""
	I0127 11:47:32.789731   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.789741   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:32.789751   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:32.789817   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:32.826257   70686 cri.go:89] found id: ""
	I0127 11:47:32.826288   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.826299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:32.826306   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:32.826368   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:32.868284   70686 cri.go:89] found id: ""
	I0127 11:47:32.868309   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.868320   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:32.868332   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:32.868354   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:32.925073   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:32.925103   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:32.941771   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:32.941804   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:33.030670   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:33.030695   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:33.030706   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:33.113430   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:33.113464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:35.663439   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:35.680531   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:35.680611   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:35.722549   70686 cri.go:89] found id: ""
	I0127 11:47:35.722571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.722581   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:35.722589   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:35.722634   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:35.788057   70686 cri.go:89] found id: ""
	I0127 11:47:35.788078   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.788084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:35.788090   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:35.788127   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:35.833279   70686 cri.go:89] found id: ""
	I0127 11:47:35.833300   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.833308   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:35.833314   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:35.833357   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:35.874544   70686 cri.go:89] found id: ""
	I0127 11:47:35.874571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.874582   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:35.874589   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:35.874654   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:35.915199   70686 cri.go:89] found id: ""
	I0127 11:47:35.915230   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.915242   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:35.915249   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:35.915314   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:31.183154   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.184826   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.682393   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.782468   69688 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:35.782484   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:35.782515   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.783769   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:35.783786   69688 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:35.783877   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.786270   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786826   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.786854   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786891   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787046   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787077   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787232   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.787378   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.787671   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.787689   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787707   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787860   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787992   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.788077   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.793305   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0127 11:47:35.793811   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.794453   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.794473   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.794772   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.795062   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.796950   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.797253   69688 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:35.797272   69688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:35.797291   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.800329   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800750   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.800775   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800948   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.801144   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.801274   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.801417   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.954346   69688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:35.990894   69688 node_ready.go:35] waiting up to 6m0s for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021695   69688 node_ready.go:49] node "embed-certs-986409" has status "Ready":"True"
	I0127 11:47:36.021724   69688 node_ready.go:38] duration metric: took 30.797887ms for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021737   69688 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:36.029373   69688 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.075684   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:36.075765   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:36.118613   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:36.128091   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:36.128117   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:36.143161   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:36.143196   69688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:36.167151   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:36.195969   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:36.196003   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:36.215973   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.216001   69688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:36.279892   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:36.279930   69688 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:36.302557   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.356672   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:36.356705   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:36.403728   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:36.403755   69688 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:36.490122   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:36.490161   69688 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:36.572014   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:36.572085   69688 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:36.666239   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:36.666266   69688 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:36.784627   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:36.784652   69688 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:36.874981   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:37.244603   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077408875s)
	I0127 11:47:37.244729   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244748   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.244744   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.126101345s)
	I0127 11:47:37.244768   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244778   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246690   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246704   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246699   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246729   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246739   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246747   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246781   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246794   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246804   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246812   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.247222   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247287   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247352   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.247364   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.248606   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.248624   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281282   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.281317   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.281631   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.281653   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281654   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:33.966528   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.970381   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:36.467240   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.467270   69396 pod_ready.go:82] duration metric: took 9.508045614s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.467284   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474274   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.474309   69396 pod_ready.go:82] duration metric: took 7.015963ms for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474322   69396 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480897   69396 pod_ready.go:93] pod "etcd-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.480926   69396 pod_ready.go:82] duration metric: took 6.596204ms for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480938   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487288   69396 pod_ready.go:93] pod "kube-apiserver-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.487320   69396 pod_ready.go:82] duration metric: took 6.372473ms for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487332   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497692   69396 pod_ready.go:93] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.497721   69396 pod_ready.go:82] duration metric: took 10.381356ms for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497733   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864696   69396 pod_ready.go:93] pod "kube-proxy-mct6v" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.864728   69396 pod_ready.go:82] duration metric: took 366.98634ms for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864742   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265304   69396 pod_ready.go:93] pod "kube-scheduler-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:37.265326   69396 pod_ready.go:82] duration metric: took 400.576908ms for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265334   69396 pod_ready.go:39] duration metric: took 10.325386118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:37.265347   69396 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:37.265391   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:37.284810   69396 api_server.go:72] duration metric: took 10.735955735s to wait for apiserver process to appear ...
	I0127 11:47:37.284832   69396 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:37.284859   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:47:37.292026   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0127 11:47:37.293646   69396 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:37.293675   69396 api_server.go:131] duration metric: took 8.835297ms to wait for apiserver health ...
	I0127 11:47:37.293685   69396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:37.469184   69396 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:37.469220   69396 system_pods.go:61] "coredns-668d6bf9bc-nqskc" [a9b24f06-5dc0-4a9e-a8f4-c6f311389c62] Running
	I0127 11:47:37.469228   69396 system_pods.go:61] "coredns-668d6bf9bc-qh6rg" [05780b99-a232-4846-a4b6-111f8d3d386e] Running
	I0127 11:47:37.469234   69396 system_pods.go:61] "etcd-no-preload-273200" [d1362a7f-ee18-4157-b8df-b9a3a9372f0a] Running
	I0127 11:47:37.469240   69396 system_pods.go:61] "kube-apiserver-no-preload-273200" [32c9d6be-2aac-475a-b7ba-0414122f7c6b] Running
	I0127 11:47:37.469247   69396 system_pods.go:61] "kube-controller-manager-no-preload-273200" [1091690b-7b66-4f8d-aa90-567ff97c5c19] Running
	I0127 11:47:37.469252   69396 system_pods.go:61] "kube-proxy-mct6v" [7cd1c7e8-827a-491e-8093-a7a3afc26544] Running
	I0127 11:47:37.469257   69396 system_pods.go:61] "kube-scheduler-no-preload-273200" [fde979de-7c70-4ef8-8d23-6ed01a30bf76] Running
	I0127 11:47:37.469265   69396 system_pods.go:61] "metrics-server-f79f97bbb-z6fn6" [8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:37.469270   69396 system_pods.go:61] "storage-provisioner" [42d86701-11bb-4b1c-a522-ec9e7912d024] Running
	I0127 11:47:37.469280   69396 system_pods.go:74] duration metric: took 175.587004ms to wait for pod list to return data ...
	I0127 11:47:37.469292   69396 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:37.664628   69396 default_sa.go:45] found service account: "default"
	I0127 11:47:37.664664   69396 default_sa.go:55] duration metric: took 195.36433ms for default service account to be created ...
	I0127 11:47:37.664679   69396 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:37.868541   69396 system_pods.go:87] 9 kube-system pods found
	I0127 11:47:37.980174   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.677566724s)
	I0127 11:47:37.980228   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980244   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980560   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980582   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980592   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980601   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980880   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.980939   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980966   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980987   69688 addons.go:479] Verifying addon metrics-server=true in "embed-certs-986409"
	I0127 11:47:38.056288   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:38.999682   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.124629898s)
	I0127 11:47:38.999752   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:38.999775   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000135   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000179   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.000185   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000205   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:39.000220   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000492   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000493   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000507   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.002275   69688 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-986409 addons enable metrics-server
	
	I0127 11:47:39.003930   69688 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:35.952137   70686 cri.go:89] found id: ""
	I0127 11:47:35.952165   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.952175   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:35.952183   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:35.952247   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:35.995842   70686 cri.go:89] found id: ""
	I0127 11:47:35.995870   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.995882   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:35.995889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:35.995946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:36.045603   70686 cri.go:89] found id: ""
	I0127 11:47:36.045629   70686 logs.go:282] 0 containers: []
	W0127 11:47:36.045639   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:36.045647   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:36.045661   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:36.122919   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:36.122952   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:36.141794   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:36.141827   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:36.246196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:36.246229   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:36.246253   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:36.363333   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:36.363378   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:38.920333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:38.937466   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:38.937549   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:38.982630   70686 cri.go:89] found id: ""
	I0127 11:47:38.982660   70686 logs.go:282] 0 containers: []
	W0127 11:47:38.982672   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:38.982680   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:38.982741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:39.027004   70686 cri.go:89] found id: ""
	I0127 11:47:39.027034   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.027045   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:39.027052   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:39.027114   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:39.068819   70686 cri.go:89] found id: ""
	I0127 11:47:39.068841   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.068849   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:39.068854   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:39.068900   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:39.105724   70686 cri.go:89] found id: ""
	I0127 11:47:39.105758   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.105770   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:39.105779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:39.105849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:39.156156   70686 cri.go:89] found id: ""
	I0127 11:47:39.156183   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.156193   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:39.156200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:39.156257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:39.193966   70686 cri.go:89] found id: ""
	I0127 11:47:39.194002   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.194012   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:39.194021   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:39.194085   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:39.231373   70686 cri.go:89] found id: ""
	I0127 11:47:39.231398   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.231407   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:39.231415   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:39.231479   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:39.278257   70686 cri.go:89] found id: ""
	I0127 11:47:39.278288   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.278299   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:39.278309   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:39.278324   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:39.356076   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:39.356128   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:39.371224   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:39.371259   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:39.446307   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:39.446334   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:39.446350   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:39.543997   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:39.544032   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:38.182709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:40.681322   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:39.005168   69688 addons.go:514] duration metric: took 3.295073777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:40.536239   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:41.539907   69688 pod_ready.go:93] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:41.539938   69688 pod_ready.go:82] duration metric: took 5.510539517s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:41.539950   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046422   69688 pod_ready.go:93] pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.046450   69688 pod_ready.go:82] duration metric: took 506.490576ms for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046464   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.056999   69688 pod_ready.go:93] pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.057022   69688 pod_ready.go:82] duration metric: took 10.550413ms for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.057033   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066831   69688 pod_ready.go:93] pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.066859   69688 pod_ready.go:82] duration metric: took 9.817042ms for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066869   69688 pod_ready.go:39] duration metric: took 6.045119057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:42.066885   69688 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:42.066943   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.106914   69688 api_server.go:72] duration metric: took 6.396863225s to wait for apiserver process to appear ...
	I0127 11:47:42.106942   69688 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:42.106967   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:47:42.115128   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0127 11:47:42.116724   69688 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:42.116746   69688 api_server.go:131] duration metric: took 9.796211ms to wait for apiserver health ...
	I0127 11:47:42.116753   69688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:42.123449   69688 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:42.123472   69688 system_pods.go:61] "coredns-668d6bf9bc-9sk5f" [c6114990-b336-472e-8720-1ef5ccd3b001] Running
	I0127 11:47:42.123479   69688 system_pods.go:61] "coredns-668d6bf9bc-jvx66" [7eab12a3-7303-43fc-84fa-034ced59689b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:47:42.123486   69688 system_pods.go:61] "etcd-embed-certs-986409" [ebdc15ff-c173-440b-ae1a-c0bc983c015b] Running
	I0127 11:47:42.123491   69688 system_pods.go:61] "kube-apiserver-embed-certs-986409" [3cbf2980-e1b2-4cff-8d01-ab9ec4806976] Running
	I0127 11:47:42.123496   69688 system_pods.go:61] "kube-controller-manager-embed-certs-986409" [642b9798-c605-4987-9d0d-2481f451d943] Running
	I0127 11:47:42.123503   69688 system_pods.go:61] "kube-proxy-b82rc" [08412bee-7381-4d81-bb67-fb39fefc29bb] Running
	I0127 11:47:42.123508   69688 system_pods.go:61] "kube-scheduler-embed-certs-986409" [7774826a-ca31-4662-94db-76f6ccbf07c3] Running
	I0127 11:47:42.123516   69688 system_pods.go:61] "metrics-server-f79f97bbb-pjkmz" [4828c28f-5ef4-48ea-9360-151007c2d9be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:42.123522   69688 system_pods.go:61] "storage-provisioner" [df18a80b-cc75-49f1-bd1a-48bab4776d25] Running
	I0127 11:47:42.123530   69688 system_pods.go:74] duration metric: took 6.771018ms to wait for pod list to return data ...
	I0127 11:47:42.123541   69688 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:42.127202   69688 default_sa.go:45] found service account: "default"
	I0127 11:47:42.127219   69688 default_sa.go:55] duration metric: took 3.6724ms for default service account to be created ...
	I0127 11:47:42.127227   69688 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:42.139808   69688 system_pods.go:87] 9 kube-system pods found
	I0127 11:47:42.081513   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.095014   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:42.095074   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:42.130635   70686 cri.go:89] found id: ""
	I0127 11:47:42.130660   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.130670   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:42.130677   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:42.130741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:42.169363   70686 cri.go:89] found id: ""
	I0127 11:47:42.169394   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.169405   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:42.169415   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:42.169475   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:42.213803   70686 cri.go:89] found id: ""
	I0127 11:47:42.213831   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.213839   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:42.213849   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:42.213911   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:42.249475   70686 cri.go:89] found id: ""
	I0127 11:47:42.249505   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.249516   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:42.249524   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:42.249719   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:42.297727   70686 cri.go:89] found id: ""
	I0127 11:47:42.297753   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.297765   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:42.297770   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:42.297822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:42.340478   70686 cri.go:89] found id: ""
	I0127 11:47:42.340503   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.340513   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:42.340520   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:42.340580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:42.372922   70686 cri.go:89] found id: ""
	I0127 11:47:42.372952   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.372963   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:42.372971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:42.373029   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:42.407938   70686 cri.go:89] found id: ""
	I0127 11:47:42.407967   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.407978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:42.407989   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:42.408005   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:42.484491   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
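
The describe-nodes failure above is a symptom rather than a cause: nothing is serving on the apiserver port yet, so every kubectl call against localhost:8443 is refused. A minimal check of the same condition, assuming curl is available on the node (the /healthz path is the standard apiserver health endpoint, not something shown in this log):

	# probe the apiserver port directly; connection refused means it is not up yet
	curl -sk --max-time 2 https://localhost:8443/healthz \
	  || echo "connection refused: kube-apiserver is not listening on 8443"
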
	I0127 11:47:42.484530   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:42.484553   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:42.579113   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:42.579152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:42.624076   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:42.624105   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:42.679902   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:42.679934   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
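
Each iteration of the block above follows the same shape: probe for a running kube-apiserver process, list CRI containers for every expected control-plane name, find none, then fall back to collecting diagnostics. A condensed sketch of the probe, assuming crictl is installed and pointed at the node's CRI socket (the names and the pgrep pattern are taken from the Run: lines above):

	# check each expected control-plane container; an empty id list means "not found"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
	done
	# the control-plane liveness check itself
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process not found"
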
	I0127 11:47:45.194468   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:45.207509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:45.207572   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:45.239999   70686 cri.go:89] found id: ""
	I0127 11:47:45.240028   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.240039   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:45.240046   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:45.240098   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:45.273395   70686 cri.go:89] found id: ""
	I0127 11:47:45.273422   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.273431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:45.273437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:45.273495   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:45.311168   70686 cri.go:89] found id: ""
	I0127 11:47:45.311202   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.311212   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:45.311220   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:45.311284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:45.349465   70686 cri.go:89] found id: ""
	I0127 11:47:45.349491   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.349508   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:45.349513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:45.349568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:45.385823   70686 cri.go:89] found id: ""
	I0127 11:47:45.385848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.385856   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:45.385862   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:45.385919   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:45.426563   70686 cri.go:89] found id: ""
	I0127 11:47:45.426591   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.426603   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:45.426610   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:45.426669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:45.467818   70686 cri.go:89] found id: ""
	I0127 11:47:45.467848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.467856   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:45.467861   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:45.467913   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:45.505509   70686 cri.go:89] found id: ""
	I0127 11:47:45.505551   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.505570   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:45.505581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:45.505595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:45.562102   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:45.562134   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:45.576502   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:45.576547   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:45.656107   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:45.656179   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:45.656200   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:45.740259   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:45.740307   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:43.182256   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:45.682893   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:48.288077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:48.305506   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:48.305575   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:48.341384   70686 cri.go:89] found id: ""
	I0127 11:47:48.341413   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.341424   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:48.341431   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:48.341490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:48.385225   70686 cri.go:89] found id: ""
	I0127 11:47:48.385256   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.385266   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:48.385273   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:48.385331   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:48.432004   70686 cri.go:89] found id: ""
	I0127 11:47:48.432026   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.432034   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:48.432039   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:48.432096   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:48.467009   70686 cri.go:89] found id: ""
	I0127 11:47:48.467037   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.467047   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:48.467054   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:48.467111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:48.503820   70686 cri.go:89] found id: ""
	I0127 11:47:48.503847   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.503858   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:48.503864   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:48.503909   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:48.538884   70686 cri.go:89] found id: ""
	I0127 11:47:48.538908   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.538915   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:48.538924   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:48.538983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:48.572744   70686 cri.go:89] found id: ""
	I0127 11:47:48.572773   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.572783   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:48.572791   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:48.572853   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:48.610043   70686 cri.go:89] found id: ""
	I0127 11:47:48.610076   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.610086   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:48.610108   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:48.610123   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:48.683427   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:48.683468   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:48.698950   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:48.698984   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:48.771789   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:48.771819   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:48.771833   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:48.852605   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:48.852642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:48.185457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:50.682230   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:51.390888   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:51.403787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:51.403867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:51.438712   70686 cri.go:89] found id: ""
	I0127 11:47:51.438739   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.438746   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:51.438752   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:51.438808   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:51.476783   70686 cri.go:89] found id: ""
	I0127 11:47:51.476811   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.476821   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:51.476829   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:51.476887   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:51.509461   70686 cri.go:89] found id: ""
	I0127 11:47:51.509505   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.509522   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:51.509534   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:51.509592   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:51.545890   70686 cri.go:89] found id: ""
	I0127 11:47:51.545918   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.545936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:51.545943   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:51.546004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:51.582831   70686 cri.go:89] found id: ""
	I0127 11:47:51.582859   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.582868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:51.582876   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:51.582935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:51.618841   70686 cri.go:89] found id: ""
	I0127 11:47:51.618866   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.618874   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:51.618880   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:51.618934   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:51.654004   70686 cri.go:89] found id: ""
	I0127 11:47:51.654037   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.654048   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:51.654055   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:51.654119   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:51.693492   70686 cri.go:89] found id: ""
	I0127 11:47:51.693525   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.693535   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:51.693547   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:51.693561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:51.742871   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:51.742901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:51.756625   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:51.756648   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:51.818231   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:51.818258   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:51.818274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:51.897522   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:51.897556   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.435357   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:54.447575   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:54.447662   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:54.481516   70686 cri.go:89] found id: ""
	I0127 11:47:54.481546   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.481557   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:54.481565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:54.481628   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:54.513468   70686 cri.go:89] found id: ""
	I0127 11:47:54.513494   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.513503   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:54.513510   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:54.513564   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:54.546743   70686 cri.go:89] found id: ""
	I0127 11:47:54.546768   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.546776   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:54.546781   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:54.546837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:54.577457   70686 cri.go:89] found id: ""
	I0127 11:47:54.577495   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.577525   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:54.577533   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:54.577604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:54.607337   70686 cri.go:89] found id: ""
	I0127 11:47:54.607366   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.607375   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:54.607381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:54.607427   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:54.651259   70686 cri.go:89] found id: ""
	I0127 11:47:54.651290   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.651301   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:54.651308   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:54.651369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:54.688579   70686 cri.go:89] found id: ""
	I0127 11:47:54.688604   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.688613   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:54.688619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:54.688678   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:54.725278   70686 cri.go:89] found id: ""
	I0127 11:47:54.725322   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.725341   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:54.725353   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:54.725367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:54.791430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:54.791452   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:54.791465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:54.868163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:54.868191   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.905354   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:54.905385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:54.957412   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:54.957444   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:53.181163   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:55.181247   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:57.471717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:57.484472   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:57.484545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:57.515302   70686 cri.go:89] found id: ""
	I0127 11:47:57.515334   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.515345   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:57.515353   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:57.515412   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:57.548214   70686 cri.go:89] found id: ""
	I0127 11:47:57.548239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.548248   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:57.548255   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:57.548316   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:57.581598   70686 cri.go:89] found id: ""
	I0127 11:47:57.581624   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.581632   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:57.581637   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:57.581682   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:57.617610   70686 cri.go:89] found id: ""
	I0127 11:47:57.617642   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.617654   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:57.617661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:57.617726   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:57.650213   70686 cri.go:89] found id: ""
	I0127 11:47:57.650239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.650246   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:57.650252   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:57.650319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:57.688111   70686 cri.go:89] found id: ""
	I0127 11:47:57.688132   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.688142   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:57.688150   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:57.688197   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:57.720752   70686 cri.go:89] found id: ""
	I0127 11:47:57.720782   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.720792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:57.720798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:57.720845   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:57.751896   70686 cri.go:89] found id: ""
	I0127 11:47:57.751925   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.751936   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:57.751946   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:57.751959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:57.802765   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:57.802797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:57.815299   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:57.815323   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:57.878584   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:57.878612   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:57.878627   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.954926   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:57.954961   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
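
The diagnostics gathered on every pass come from the same five sources, in varying order; condensed from the Run: lines above (journal output capped at the last 400 lines):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
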
	I0127 11:48:00.492831   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:00.505398   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:00.505458   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:00.541546   70686 cri.go:89] found id: ""
	I0127 11:48:00.541572   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.541583   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:00.541590   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:00.541658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:00.574543   70686 cri.go:89] found id: ""
	I0127 11:48:00.574575   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.574585   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:00.574596   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:00.574658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:00.607826   70686 cri.go:89] found id: ""
	I0127 11:48:00.607855   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.607865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:00.607872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:00.607931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:00.642893   70686 cri.go:89] found id: ""
	I0127 11:48:00.642925   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.642936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:00.642944   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:00.642997   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:00.675525   70686 cri.go:89] found id: ""
	I0127 11:48:00.675549   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.675557   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:00.675563   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:00.675642   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:00.708878   70686 cri.go:89] found id: ""
	I0127 11:48:00.708913   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.708921   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:00.708926   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:00.708971   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:00.740471   70686 cri.go:89] found id: ""
	I0127 11:48:00.740505   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.740512   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:00.740518   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:00.740568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:00.776050   70686 cri.go:89] found id: ""
	I0127 11:48:00.776078   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.776088   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:00.776099   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:00.776112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:00.789429   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:00.789465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:00.855134   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:00.855159   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:00.855176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.684463   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:59.175404   70237 pod_ready.go:82] duration metric: took 4m0.000243677s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" ...
	E0127 11:47:59.175451   70237 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:47:59.175501   70237 pod_ready.go:39] duration metric: took 4m10.536256424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:59.175547   70237 kubeadm.go:597] duration metric: took 4m18.512037331s to restartPrimaryControlPlane
	W0127 11:47:59.175647   70237 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:47:59.175705   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
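
Interleaved with the retry loop, the second profile (pid 70237) hits its own ceiling here: after 4m0s the pod metrics-server-f79f97bbb-swwsl never reports Ready, so minikube abandons the control-plane restart and falls back to a full kubeadm reset. An equivalent of the wait that timed out, with the profile name as a hypothetical placeholder (the pod name and timeout are from the log):

	PROFILE=...   # hypothetical: the minikube profile driving pid 70237
	kubectl --context "${PROFILE}" -n kube-system wait \
	  --for=condition=Ready pod/metrics-server-f79f97bbb-swwsl --timeout=4m
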
	I0127 11:48:00.932863   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:00.932910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:00.969770   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:00.969797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.521596   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:03.536040   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:03.536171   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:03.571013   70686 cri.go:89] found id: ""
	I0127 11:48:03.571046   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.571057   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:03.571065   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:03.571128   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:03.605846   70686 cri.go:89] found id: ""
	I0127 11:48:03.605871   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.605879   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:03.605885   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:03.605931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:03.641481   70686 cri.go:89] found id: ""
	I0127 11:48:03.641515   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.641524   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:03.641529   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:03.641595   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:03.676290   70686 cri.go:89] found id: ""
	I0127 11:48:03.676316   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.676326   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:03.676333   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:03.676395   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:03.713213   70686 cri.go:89] found id: ""
	I0127 11:48:03.713235   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.713243   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:03.713248   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:03.713337   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:03.746114   70686 cri.go:89] found id: ""
	I0127 11:48:03.746141   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.746151   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:03.746158   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:03.746217   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:03.780250   70686 cri.go:89] found id: ""
	I0127 11:48:03.780289   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.780299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:03.780307   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:03.780354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:03.817856   70686 cri.go:89] found id: ""
	I0127 11:48:03.817885   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.817896   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:03.817907   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:03.817921   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:03.898728   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:03.898779   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:03.935189   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:03.935222   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.990903   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:03.990946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:04.004559   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:04.004584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:04.078588   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:06.578765   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:06.592073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:06.592134   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:06.624430   70686 cri.go:89] found id: ""
	I0127 11:48:06.624465   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.624476   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:06.624484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:06.624555   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:06.677207   70686 cri.go:89] found id: ""
	I0127 11:48:06.677244   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.677257   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:06.677264   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:06.677346   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:06.718809   70686 cri.go:89] found id: ""
	I0127 11:48:06.718833   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.718840   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:06.718845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:06.718890   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:06.754041   70686 cri.go:89] found id: ""
	I0127 11:48:06.754076   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.754089   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:06.754100   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:06.754160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:06.785748   70686 cri.go:89] found id: ""
	I0127 11:48:06.785776   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.785788   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:06.785795   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:06.785854   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:06.819849   70686 cri.go:89] found id: ""
	I0127 11:48:06.819872   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.819879   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:06.819884   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:06.819930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:06.853347   70686 cri.go:89] found id: ""
	I0127 11:48:06.853372   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.853381   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:06.853387   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:06.853438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:06.885714   70686 cri.go:89] found id: ""
	I0127 11:48:06.885740   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.885747   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:06.885755   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:06.885765   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:06.921805   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:06.921832   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:06.974607   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:06.974638   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:06.987566   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:06.987625   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:07.056872   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:07.056892   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:07.056905   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:09.644164   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:09.657446   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:09.657519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:09.696908   70686 cri.go:89] found id: ""
	I0127 11:48:09.696940   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.696950   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:09.696957   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:09.697016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:09.729636   70686 cri.go:89] found id: ""
	I0127 11:48:09.729665   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.729675   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:09.729682   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:09.729742   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:09.769699   70686 cri.go:89] found id: ""
	I0127 11:48:09.769726   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.769734   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:09.769740   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:09.769791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:09.801315   70686 cri.go:89] found id: ""
	I0127 11:48:09.801360   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.801368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:09.801374   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:09.801432   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:09.831170   70686 cri.go:89] found id: ""
	I0127 11:48:09.831212   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.831221   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:09.831226   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:09.831294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:09.862163   70686 cri.go:89] found id: ""
	I0127 11:48:09.862188   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.862194   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:09.862200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:09.862262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:09.893097   70686 cri.go:89] found id: ""
	I0127 11:48:09.893125   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.893136   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:09.893144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:09.893201   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:09.924215   70686 cri.go:89] found id: ""
	I0127 11:48:09.924247   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.924259   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:09.924269   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:09.924286   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:09.990827   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:09.990849   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:09.990859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:10.063335   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:10.063366   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:10.099158   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:10.099199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:10.150789   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:10.150821   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:12.664524   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:12.677711   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:12.677791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:12.710353   70686 cri.go:89] found id: ""
	I0127 11:48:12.710377   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.710384   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:12.710389   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:12.710443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:12.743545   70686 cri.go:89] found id: ""
	I0127 11:48:12.743572   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.743579   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:12.743584   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:12.743646   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:12.775386   70686 cri.go:89] found id: ""
	I0127 11:48:12.775413   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.775423   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:12.775430   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:12.775488   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:12.808803   70686 cri.go:89] found id: ""
	I0127 11:48:12.808828   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.808835   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:12.808841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:12.808898   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:12.842531   70686 cri.go:89] found id: ""
	I0127 11:48:12.842554   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.842561   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:12.842566   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:12.842610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:12.875470   70686 cri.go:89] found id: ""
	I0127 11:48:12.875501   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.875512   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:12.875522   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:12.875579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:12.908768   70686 cri.go:89] found id: ""
	I0127 11:48:12.908790   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.908797   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:12.908802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:12.908848   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:12.943312   70686 cri.go:89] found id: ""
	I0127 11:48:12.943340   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.943348   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:12.943356   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:12.943368   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:12.995939   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:12.995971   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:13.009006   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:13.009028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:13.097589   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:13.097607   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:13.097618   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:13.180494   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:13.180526   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:15.719725   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:15.733707   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:15.733770   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:15.771051   70686 cri.go:89] found id: ""
	I0127 11:48:15.771076   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.771086   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:15.771094   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:15.771156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:15.803893   70686 cri.go:89] found id: ""
	I0127 11:48:15.803926   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.803938   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:15.803945   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:15.803995   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:15.840882   70686 cri.go:89] found id: ""
	I0127 11:48:15.840915   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.840927   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:15.840935   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:15.840993   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:15.879101   70686 cri.go:89] found id: ""
	I0127 11:48:15.879132   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.879144   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:15.879165   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:15.879227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:15.910272   70686 cri.go:89] found id: ""
	I0127 11:48:15.910306   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.910317   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:15.910325   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:15.910385   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:15.942060   70686 cri.go:89] found id: ""
	I0127 11:48:15.942085   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.942093   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:15.942099   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:15.942160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:15.975105   70686 cri.go:89] found id: ""
	I0127 11:48:15.975136   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.975147   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:15.975155   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:15.975219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:16.009270   70686 cri.go:89] found id: ""
	I0127 11:48:16.009302   70686 logs.go:282] 0 containers: []
	W0127 11:48:16.009313   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:16.009323   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:16.009337   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:16.059868   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:16.059901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:16.074089   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:16.074118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:16.150389   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:16.150435   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:16.150450   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:16.226031   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:16.226070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:18.766131   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:18.780688   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:18.780758   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:18.827413   70686 cri.go:89] found id: ""
	I0127 11:48:18.827443   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.827454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:18.827462   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:18.827528   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:18.890142   70686 cri.go:89] found id: ""
	I0127 11:48:18.890169   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.890179   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:18.890187   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:18.890252   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:18.921896   70686 cri.go:89] found id: ""
	I0127 11:48:18.921925   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.921933   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:18.921938   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:18.921989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:18.956705   70686 cri.go:89] found id: ""
	I0127 11:48:18.956728   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.956736   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:18.956744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:18.956813   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:18.989832   70686 cri.go:89] found id: ""
	I0127 11:48:18.989858   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.989868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:18.989874   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:18.989929   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:19.026132   70686 cri.go:89] found id: ""
	I0127 11:48:19.026159   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.026166   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:19.026173   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:19.026219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:19.059138   70686 cri.go:89] found id: ""
	I0127 11:48:19.059162   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.059170   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:19.059175   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:19.059220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:19.092018   70686 cri.go:89] found id: ""
	I0127 11:48:19.092048   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.092058   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:19.092069   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:19.092085   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:19.167121   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:19.167152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:19.205334   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:19.205364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:19.254602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:19.254639   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:19.268979   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:19.269006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:19.338679   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
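
Every "describe nodes" attempt in this loop fails identically: connection refused on localhost:8443 means nothing is listening on the apiserver's secure port yet, so kubectl cannot reach the cluster at all. A quick illustrative check, not part of the harness, that reproduces what those stderr lines imply:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Expected while the control plane is down: connection refused.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
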
	I0127 11:48:21.839791   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:21.852667   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:21.852727   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:21.886171   70686 cri.go:89] found id: ""
	I0127 11:48:21.886197   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.886205   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:21.886210   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:21.886257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:21.921652   70686 cri.go:89] found id: ""
	I0127 11:48:21.921679   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.921689   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:21.921696   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:21.921755   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:21.957643   70686 cri.go:89] found id: ""
	I0127 11:48:21.957670   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.957679   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:21.957686   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:21.957746   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:21.992841   70686 cri.go:89] found id: ""
	I0127 11:48:21.992871   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.992881   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:21.992888   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:21.992952   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:22.028313   70686 cri.go:89] found id: ""
	I0127 11:48:22.028356   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.028365   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:22.028376   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:22.028421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:22.063653   70686 cri.go:89] found id: ""
	I0127 11:48:22.063679   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.063686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:22.063692   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:22.063749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:22.095804   70686 cri.go:89] found id: ""
	I0127 11:48:22.095831   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.095839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:22.095845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:22.095904   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:22.128161   70686 cri.go:89] found id: ""
	I0127 11:48:22.128194   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.128205   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:22.128217   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:22.128231   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:22.166325   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:22.166348   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:22.216549   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:22.216599   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:22.229716   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:22.229745   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:22.295957   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:22.295985   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:22.296000   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:24.876705   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:24.889666   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:24.889741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:24.923871   70686 cri.go:89] found id: ""
	I0127 11:48:24.923904   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.923915   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:24.923923   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:24.923983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:24.959046   70686 cri.go:89] found id: ""
	I0127 11:48:24.959078   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.959090   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:24.959098   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:24.959151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:24.994427   70686 cri.go:89] found id: ""
	I0127 11:48:24.994457   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.994468   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:24.994475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:24.994535   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:25.026201   70686 cri.go:89] found id: ""
	I0127 11:48:25.026230   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.026239   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:25.026247   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:25.026309   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:25.058228   70686 cri.go:89] found id: ""
	I0127 11:48:25.058250   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.058258   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:25.058263   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:25.058319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:25.089128   70686 cri.go:89] found id: ""
	I0127 11:48:25.089165   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.089176   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:25.089186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:25.089262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:25.124376   70686 cri.go:89] found id: ""
	I0127 11:48:25.124404   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.124411   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:25.124417   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:25.124464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:25.157926   70686 cri.go:89] found id: ""
	I0127 11:48:25.157959   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.157970   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:25.157982   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:25.157996   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:25.208316   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:25.208347   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:25.223045   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:25.223070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:25.289735   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:25.289757   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:25.289771   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:25.376030   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:25.376082   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:27.914186   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:27.926651   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:27.926716   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:27.965235   70686 cri.go:89] found id: ""
	I0127 11:48:27.965263   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.965273   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:27.965279   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:27.965334   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:27.999266   70686 cri.go:89] found id: ""
	I0127 11:48:27.999301   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.999312   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:27.999320   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:27.999377   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:28.031394   70686 cri.go:89] found id: ""
	I0127 11:48:28.031442   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.031454   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:28.031462   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:28.031524   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:28.063460   70686 cri.go:89] found id: ""
	I0127 11:48:28.063494   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.063505   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:28.063513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:28.063579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:28.098052   70686 cri.go:89] found id: ""
	I0127 11:48:28.098075   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.098082   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:28.098087   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:28.098133   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:28.132561   70686 cri.go:89] found id: ""
	I0127 11:48:28.132592   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.132601   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:28.132609   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:28.132668   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:28.173166   70686 cri.go:89] found id: ""
	I0127 11:48:28.173197   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.173206   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:28.173212   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:28.173263   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:28.207104   70686 cri.go:89] found id: ""
	I0127 11:48:28.207134   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.207144   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:28.207155   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:28.207169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:28.255860   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:28.255897   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:28.270823   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:28.270849   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:28.340536   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:28.340562   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:28.340577   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:28.424875   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:28.424910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:26.746474   70237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.570747097s)
	I0127 11:48:26.746545   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:26.762637   70237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:26.776063   70237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:26.789742   70237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:26.789766   70237 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:26.789818   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:48:26.800449   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:26.800505   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:26.818106   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:48:26.827104   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:26.827167   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:26.844719   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.861215   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:26.861299   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.877899   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:48:26.886638   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:26.886691   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
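
Note the interleaving here: PID 70237 is bootstrapping the default-k8s-diff-port-407489 profile while PID 70686 keeps polling its own cluster. After `kubeadm reset`, minikube sweeps /etc/kubernetes for stale kubeconfigs: it greps each file for the expected control-plane endpoint and removes any file where grep exits non-zero, which covers both a mismatched endpoint and, as in the status-2 exits above, a file that does not exist. A sketch of that sweep using the grep/rm commands from the log (the loop structure is an assumption, not kubeadm.go's real shape):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, conf := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + conf
			// grep exits non-zero if the endpoint is absent OR the file is
			// missing; either way the config is treated as stale and removed.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}
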
	I0127 11:48:26.895347   70237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:27.038970   70237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:34.381659   70237 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:48:34.381747   70237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:48:34.381834   70237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:48:34.382006   70237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:48:34.382166   70237 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:48:34.382273   70237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:48:34.384155   70237 out.go:235]   - Generating certificates and keys ...
	I0127 11:48:34.384281   70237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:48:34.384383   70237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:48:34.384475   70237 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:48:34.384540   70237 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:48:34.384619   70237 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:48:34.384712   70237 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:48:34.384815   70237 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:48:34.384870   70237 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:48:34.384936   70237 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:48:34.385045   70237 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:48:34.385125   70237 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:48:34.385205   70237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:48:34.385276   70237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:48:34.385331   70237 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:48:34.385408   70237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:48:34.385500   70237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:48:34.385576   70237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:48:34.385691   70237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:48:34.385779   70237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:48:34.387105   70237 out.go:235]   - Booting up control plane ...
	I0127 11:48:34.387208   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:48:34.387285   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:48:34.387359   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:48:34.387457   70237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:48:34.387545   70237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:48:34.387589   70237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:48:34.387728   70237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:48:34.387818   70237 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:48:34.387875   70237 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001607262s
	I0127 11:48:34.387947   70237 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:48:34.388039   70237 kubeadm.go:310] [api-check] The API server is healthy after 4.002263796s
	I0127 11:48:34.388196   70237 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:48:34.388338   70237 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:48:34.388399   70237 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:48:34.388623   70237 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-407489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:48:34.388706   70237 kubeadm.go:310] [bootstrap-token] Using token: n96bmw.dtq43nz27fzxgr8y
	I0127 11:48:34.390189   70237 out.go:235]   - Configuring RBAC rules ...
	I0127 11:48:34.390316   70237 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:48:34.390409   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:48:34.390579   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:48:34.390756   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:48:34.390876   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:48:34.390986   70237 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:48:34.391159   70237 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:48:34.391231   70237 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:48:34.391299   70237 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:48:34.391310   70237 kubeadm.go:310] 
	I0127 11:48:34.391403   70237 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:48:34.391413   70237 kubeadm.go:310] 
	I0127 11:48:34.391518   70237 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:48:34.391530   70237 kubeadm.go:310] 
	I0127 11:48:34.391577   70237 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:48:34.391699   70237 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:48:34.391769   70237 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:48:34.391776   70237 kubeadm.go:310] 
	I0127 11:48:34.391868   70237 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:48:34.391882   70237 kubeadm.go:310] 
	I0127 11:48:34.391943   70237 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:48:34.391952   70237 kubeadm.go:310] 
	I0127 11:48:34.392024   70237 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:48:34.392099   70237 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:48:34.392204   70237 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:48:34.392219   70237 kubeadm.go:310] 
	I0127 11:48:34.392359   70237 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:48:34.392465   70237 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:48:34.392480   70237 kubeadm.go:310] 
	I0127 11:48:34.392616   70237 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.392829   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:48:34.392944   70237 kubeadm.go:310] 	--control-plane 
	I0127 11:48:34.392963   70237 kubeadm.go:310] 
	I0127 11:48:34.393089   70237 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:48:34.393100   70237 kubeadm.go:310] 
	I0127 11:48:34.393184   70237 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.393325   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
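
The --discovery-token-ca-cert-hash in the join command above is reproducible: kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the cluster CA with SHA-256. A self-contained sketch that recomputes it from the certificate directory named earlier in this init output ("/var/lib/minikube/certs"):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("ca.crt is not PEM")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// SHA-256 over the DER-encoded SubjectPublicKeyInfo; this is the value
		// kubeadm prints as --discovery-token-ca-cert-hash sha256:<hex>.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
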
	I0127 11:48:34.393340   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:48:34.393350   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:48:34.394995   70237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:48:30.970758   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:30.987346   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:30.987422   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:31.022870   70686 cri.go:89] found id: ""
	I0127 11:48:31.022900   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.022911   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:31.022919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:31.022980   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:31.056491   70686 cri.go:89] found id: ""
	I0127 11:48:31.056519   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.056529   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:31.056537   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:31.056593   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:31.091268   70686 cri.go:89] found id: ""
	I0127 11:48:31.091301   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.091313   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:31.091320   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:31.091378   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:31.124449   70686 cri.go:89] found id: ""
	I0127 11:48:31.124479   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.124489   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:31.124497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:31.124565   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:31.167383   70686 cri.go:89] found id: ""
	I0127 11:48:31.167410   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.167418   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:31.167424   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:31.167473   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:31.205066   70686 cri.go:89] found id: ""
	I0127 11:48:31.205165   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.205185   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:31.205194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:31.205265   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:31.242101   70686 cri.go:89] found id: ""
	I0127 11:48:31.242132   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.242144   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:31.242151   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:31.242208   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:31.278496   70686 cri.go:89] found id: ""
	I0127 11:48:31.278595   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.278610   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:31.278622   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:31.278645   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:31.366805   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:31.366835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:31.366851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:31.445608   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:31.445642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:31.487502   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:31.487529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:31.566139   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:31.566171   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.080397   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:34.094121   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:34.094187   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:34.131591   70686 cri.go:89] found id: ""
	I0127 11:48:34.131635   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.131646   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:34.131654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:34.131711   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:34.167143   70686 cri.go:89] found id: ""
	I0127 11:48:34.167175   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.167185   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:34.167192   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:34.167259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:34.203241   70686 cri.go:89] found id: ""
	I0127 11:48:34.203270   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.203283   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:34.203290   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:34.203349   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:34.238023   70686 cri.go:89] found id: ""
	I0127 11:48:34.238053   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.238061   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:34.238067   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:34.238115   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:34.273362   70686 cri.go:89] found id: ""
	I0127 11:48:34.273388   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.273398   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:34.273406   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:34.273469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:34.310047   70686 cri.go:89] found id: ""
	I0127 11:48:34.310073   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.310084   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:34.310092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:34.310148   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:34.346880   70686 cri.go:89] found id: ""
	I0127 11:48:34.346914   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.346924   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:34.346932   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:34.346987   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:34.382306   70686 cri.go:89] found id: ""
	I0127 11:48:34.382327   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.382339   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:34.382348   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:34.382364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:34.494656   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:34.494697   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:34.541974   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:34.542009   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:34.619534   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:34.619584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.634607   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:34.634631   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:34.705419   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:34.396212   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:48:34.408954   70237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
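
The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen two lines earlier ("kvm2" driver + "crio" runtime, recommending bridge). The exact bytes are not reproduced in the log, so the conflist below is an assumed, representative bridge-plus-portmap shape rather than minikube's byte-for-byte template:

	package main

	import "os"

	// Representative conflist only; subnet and flag values are assumptions.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}
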
	I0127 11:48:34.431113   70237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:48:34.431252   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:34.431257   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-407489 minikube.k8s.io/updated_at=2025_01_27T11_48_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=default-k8s-diff-port-407489 minikube.k8s.io/primary=true
	I0127 11:48:34.469468   70237 ops.go:34] apiserver oom_adj: -16
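
The ops.go line records the apiserver's OOM score adjustment: -16 on the legacy oom_adj scale of -17 to +15 makes the kernel's OOM killer strongly prefer other processes. A sketch of the probe, equivalent to the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err)
			return
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
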
	I0127 11:48:34.666106   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.167035   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.667149   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.167156   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.666148   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.167090   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.667139   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.166714   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.666209   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.166966   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.353909   70237 kubeadm.go:1113] duration metric: took 4.922724686s to wait for elevateKubeSystemPrivileges
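
The half-second cadence of the `kubectl get sa default` runs above is a poll: minikube waits for the default ServiceAccount to exist, the last step of elevateKubeSystemPrivileges, then logs the duration metric. A minimal sketch of the same wait, with the binary and kubeconfig paths taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
		start := time.Now()
		for {
			// Succeeds only once bootstrap has created the default
			// ServiceAccount in the default namespace.
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				break
			}
			if time.Since(start) > 5*time.Minute {
				fmt.Println("timed out waiting for default ServiceAccount")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
	}
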
	I0127 11:48:39.353963   70237 kubeadm.go:394] duration metric: took 4m58.742572387s to StartCluster
	I0127 11:48:39.353997   70237 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.354112   70237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:48:39.356217   70237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.356516   70237 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:48:39.356640   70237 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:48:39.356750   70237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356786   70237 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356793   70237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356805   70237 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356806   70237 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356812   70237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-407489"
	W0127 11:48:39.356815   70237 addons.go:247] addon metrics-server should already be in state true
	W0127 11:48:39.356814   70237 addons.go:247] addon dashboard should already be in state true
	W0127 11:48:39.356785   70237 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356919   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356780   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
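
The addons block above is driven by the toEnable map from the enable-addons line: only entries set to true get a controller, and because this profile already exists, each one also emits the "should already be in state true" warning. A compressed sketch of that gate (map trimmed to a few entries; types and flow are assumptions, not the real addons.go):

	package main

	import "fmt"

	func main() {
		toEnable := map[string]bool{
			"storage-provisioner":  true,
			"default-storageclass": true,
			"dashboard":            true,
			"metrics-server":       true,
			"ingress":              false, // disabled entries are skipped
		}
		existingProfile := true
		for name, enable := range toEnable {
			if !enable {
				continue
			}
			fmt.Printf("Setting addon %s=true\n", name)
			if existingProfile {
				fmt.Printf("addon %s should already be in state true\n", name)
			}
		}
	}
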
	I0127 11:48:39.357367   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357421   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357452   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357461   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357470   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357481   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357489   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357427   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.358335   70237 out.go:177] * Verifying Kubernetes components...
	I0127 11:48:39.359875   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:48:39.375814   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0127 11:48:39.376161   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0127 11:48:39.376320   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376584   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376816   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376834   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.376964   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376976   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.377329   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.377542   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.377878   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.378406   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.378448   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.378664   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0127 11:48:39.378707   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0127 11:48:39.379469   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.379520   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.380020   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.380031   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.380391   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.380901   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.380937   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.381376   70237 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-407489"
	W0127 11:48:39.381392   70237 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:48:39.381420   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.381774   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.381828   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.382425   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.382444   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.382932   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.383472   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.383515   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.399683   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0127 11:48:39.400302   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.400882   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.400901   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.401296   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.401495   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0127 11:48:39.401654   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0127 11:48:39.401894   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.401947   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402556   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402892   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402909   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.402980   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402997   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.403362   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0127 11:48:39.403805   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.403823   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.404268   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.404296   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.404472   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.404848   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.404929   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.405710   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.405726   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.406261   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.406477   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.406675   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.407171   70237 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:48:39.408344   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.408427   70237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:48:39.409688   70237 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:48:39.409753   70237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:48:37.206052   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:37.219444   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:37.219530   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:37.254304   70686 cri.go:89] found id: ""
	I0127 11:48:37.254334   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.254342   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:37.254349   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:37.254409   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:37.291229   70686 cri.go:89] found id: ""
	I0127 11:48:37.291264   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.291276   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:37.291289   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:37.291353   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:37.329358   70686 cri.go:89] found id: ""
	I0127 11:48:37.329381   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.329389   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:37.329394   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:37.329439   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:37.368500   70686 cri.go:89] found id: ""
	I0127 11:48:37.368529   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.368537   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:37.368543   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:37.368604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:37.400175   70686 cri.go:89] found id: ""
	I0127 11:48:37.400203   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.400213   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:37.400221   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:37.400284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:37.432661   70686 cri.go:89] found id: ""
	I0127 11:48:37.432687   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.432697   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:37.432704   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:37.432762   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:37.464843   70686 cri.go:89] found id: ""
	I0127 11:48:37.464886   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.464897   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:37.464905   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:37.464970   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:37.501795   70686 cri.go:89] found id: ""
	I0127 11:48:37.501818   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.501826   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:37.501835   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:37.501845   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:37.580256   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:37.580281   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:37.580297   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:37.658741   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:37.658790   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:37.701171   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:37.701198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:37.761906   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:37.761941   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
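
Aside (not part of the captured log): the round above is the harness probing, over SSH, whether any control-plane container exists, by running crictl with a name filter and treating empty output as "not found". Below is a minimal Go sketch of that check run locally rather than over SSH; it assumes a local crictl install, and the program layout is illustrative, not minikube's actual cri.go.

	// Sketch: list CRI containers matching a name and treat empty output as
	// "no container found", as the cri.go entries above do. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl failed:", err) // e.g. crictl missing or CRI socket down
			return
		}
		ids := strings.Fields(string(out)) // one container ID per line when present
		if len(ids) == 0 {
			fmt.Println(`no container was found matching "kube-apiserver"`)
			return
		}
		fmt.Println("found ids:", ids)
	}
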
	I0127 11:48:40.280848   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:40.294890   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:40.294962   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:40.333860   70686 cri.go:89] found id: ""
	I0127 11:48:40.333885   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.333904   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:40.333919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:40.333983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:40.377039   70686 cri.go:89] found id: ""
	I0127 11:48:40.377072   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.377083   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:40.377093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:40.377157   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:40.413874   70686 cri.go:89] found id: ""
	I0127 11:48:40.413899   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.413909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:40.413915   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:40.413976   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:40.453270   70686 cri.go:89] found id: ""
	I0127 11:48:40.453302   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.453313   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:40.453322   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:40.453438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:40.495704   70686 cri.go:89] found id: ""
	I0127 11:48:40.495739   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.495750   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:40.495759   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:40.495825   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:40.541078   70686 cri.go:89] found id: ""
	I0127 11:48:40.541117   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.541128   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:40.541135   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:40.541195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:40.577161   70686 cri.go:89] found id: ""
	I0127 11:48:40.577190   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.577201   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:40.577207   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:40.577267   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:40.611784   70686 cri.go:89] found id: ""
	I0127 11:48:40.611815   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.611825   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:40.611837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:40.611851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.627400   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:40.627429   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:40.697583   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:40.697609   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:40.697624   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:40.779493   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:40.779529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:40.829083   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:40.829117   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
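
Aside: the "sudo pgrep -xnf kube-apiserver.*minikube.*" entries recur at 11:48:37, 11:48:40, and 11:48:43, i.e. the harness is polling for the apiserver process every few seconds until a deadline. A deadline poll loop of roughly that shape is sketched below; waitForAPIServer is a hypothetical helper, not minikube's code, and the three-second interval is inferred from the timestamps.

	// Sketch of a deadline poll loop, assuming pgrep semantics: exit status 0
	// only when a matching process exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(3 * time.Second) // interval inferred from the log timestamps
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}
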
	I0127 11:48:39.409927   70237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.409949   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:48:39.409969   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410883   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:48:39.410891   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:48:39.410900   70237 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:48:39.410901   70237 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.414712   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415032   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415363   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415380   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415508   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415557   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.415793   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415795   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.415811   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415965   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416188   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.416193   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416207   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.416226   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.416326   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416464   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416647   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416856   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.417093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.417232   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.425335   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0127 11:48:39.425726   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.426147   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.426164   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.426496   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.426691   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.428519   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.428734   70237 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.428750   70237 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:48:39.428767   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.431736   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.431955   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.431979   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.432148   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.432352   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.432522   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.432669   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.622216   70237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:48:39.650134   70237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677286   70237 node_ready.go:49] node "default-k8s-diff-port-407489" has status "Ready":"True"
	I0127 11:48:39.677309   70237 node_ready.go:38] duration metric: took 27.135622ms for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677318   70237 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:39.687667   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:39.731665   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.746831   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.793916   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:48:39.793939   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:48:39.875140   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:48:39.875167   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:48:39.930947   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:48:39.930970   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:48:39.943793   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:48:39.943816   70237 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:48:39.993962   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:48:39.993993   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:48:40.041925   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:48:40.041962   70237 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:48:40.045715   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:48:40.045733   70237 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:48:40.168240   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:48:40.168261   70237 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:48:40.170308   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.170329   70237 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:48:40.222208   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:48:40.222229   70237 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:48:40.226028   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.312875   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:48:40.312990   70237 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:48:40.389058   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.389088   70237 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:48:40.437979   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.764016   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017148966s)
	I0127 11:48:40.764080   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764098   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032393238s)
	I0127 11:48:40.764145   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764163   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764466   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764476   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:40.764483   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764520   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764535   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764525   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764555   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764564   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764785   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764804   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764924   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.781921   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.781947   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.782236   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.782254   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294495   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.068429548s)
	I0127 11:48:41.294547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294560   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.294909   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.294914   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.294937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294945   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294952   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.295173   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.295220   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.295238   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.295255   70237 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:41.723523   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:41.929362   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.491326001s)
	I0127 11:48:41.929422   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929437   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.929779   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.929797   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.929815   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929825   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.930103   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.930125   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.930151   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.931487   70237 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-407489 addons enable metrics-server
	
	I0127 11:48:41.933107   70237 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
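
Aside: the addon flow visible above is two steps per addon: scp each manifest into /etc/kubernetes/addons/, then apply the whole set in a single kubectl invocation run under sudo with KUBECONFIG passed as an environment assignment, exactly as the ssh_runner lines show. A hedged Go sketch of that apply step follows; applyAddons is a hypothetical helper, while the kubectl path, KUBECONFIG path, and sudo form are taken verbatim from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddons mirrors the apply step above: one kubectl invocation with a
	// -f flag per staged manifest. sudo accepts the VAR=value assignment
	// before the command, as in the logged command line.
	func applyAddons(manifests []string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		return exec.Command("sudo", args...).Run()
	}

	func main() {
		err := applyAddons([]string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		})
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
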
	I0127 11:48:43.382411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:43.399629   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:43.399702   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:43.433083   70686 cri.go:89] found id: ""
	I0127 11:48:43.433116   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.433127   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:43.433134   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:43.433207   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:43.471725   70686 cri.go:89] found id: ""
	I0127 11:48:43.471756   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.471788   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:43.471796   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:43.471861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:43.505911   70686 cri.go:89] found id: ""
	I0127 11:48:43.505944   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.505956   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:43.505964   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:43.506034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:43.545670   70686 cri.go:89] found id: ""
	I0127 11:48:43.545705   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.545715   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:43.545723   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:43.545773   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:43.588086   70686 cri.go:89] found id: ""
	I0127 11:48:43.588113   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.588124   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:43.588131   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:43.588193   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:43.626703   70686 cri.go:89] found id: ""
	I0127 11:48:43.626739   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.626747   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:43.626754   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:43.626810   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:43.666123   70686 cri.go:89] found id: ""
	I0127 11:48:43.666155   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.666164   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:43.666171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:43.666237   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:43.701503   70686 cri.go:89] found id: ""
	I0127 11:48:43.701527   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.701537   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:43.701548   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:43.701561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:43.752145   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:43.752177   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:43.766551   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:43.766579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:43.838715   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:43.838740   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:43.838753   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:43.923406   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:43.923439   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:41.934427   70237 addons.go:514] duration metric: took 2.577793658s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:48:44.193593   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:46.470479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:46.483541   70686 kubeadm.go:597] duration metric: took 4m2.154865283s to restartPrimaryControlPlane
	W0127 11:48:46.483635   70686 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:48:46.483664   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:48:46.956612   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:46.970448   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:46.979726   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:46.990401   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:46.990418   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:46.990456   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:48:46.999850   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:46.999921   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:47.009371   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:48:47.019126   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:47.019177   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:47.029905   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.040611   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:47.040690   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.051767   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:48:47.063007   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:47.063076   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
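
Aside: the kubeadm.go:163 sequence above is a stale-config cleanup. For each kubeconfig-style file it greps for the expected control-plane endpoint; a non-zero grep exit (endpoint absent, or file missing, as here) triggers rm -f so that the following "kubeadm init" regenerates the file. A minimal Go sketch of that loop follows; cleanStaleConfigs is a hypothetical helper, with the endpoint and file paths taken from the log.

	package main

	import "os/exec"

	// cleanStaleConfigs reproduces the check above: keep each kubeconfig only
	// if it already points at the expected control-plane endpoint, otherwise
	// remove it before kubeadm init recreates it.
	func cleanStaleConfigs() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the endpoint is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				_ = exec.Command("sudo", "rm", "-f", f).Run() // best effort
			}
		}
	}

	func main() { cleanStaleConfigs() }
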
	I0127 11:48:47.074431   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:47.304989   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:46.196598   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:48.696840   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:49.199550   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.199588   70237 pod_ready.go:82] duration metric: took 9.511896787s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.199600   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205893   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.205926   70237 pod_ready.go:82] duration metric: took 6.298932ms for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205940   70237 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239052   70237 pod_ready.go:93] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.239081   70237 pod_ready.go:82] duration metric: took 33.131129ms for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239094   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265456   70237 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.265491   70237 pod_ready.go:82] duration metric: took 26.386948ms for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265505   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272301   70237 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.272330   70237 pod_ready.go:82] duration metric: took 6.816295ms for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272342   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591592   70237 pod_ready.go:93] pod "kube-proxy-26pw8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.591640   70237 pod_ready.go:82] duration metric: took 319.289955ms for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591655   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991689   70237 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.991721   70237 pod_ready.go:82] duration metric: took 400.056967ms for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991733   70237 pod_ready.go:39] duration metric: took 10.314402994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:49.991751   70237 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:48:49.991813   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:50.013067   70237 api_server.go:72] duration metric: took 10.656516392s to wait for apiserver process to appear ...
	I0127 11:48:50.013088   70237 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:48:50.013114   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:48:50.018115   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 200:
	ok
	I0127 11:48:50.019049   70237 api_server.go:141] control plane version: v1.32.1
	I0127 11:48:50.019078   70237 api_server.go:131] duration metric: took 5.982015ms to wait for apiserver health ...
	I0127 11:48:50.019088   70237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:48:50.196032   70237 system_pods.go:59] 9 kube-system pods found
	I0127 11:48:50.196064   70237 system_pods.go:61] "coredns-668d6bf9bc-pd5ml" [c33b4c24-e93a-4370-a289-6dca24315394] Running
	I0127 11:48:50.196070   70237 system_pods.go:61] "coredns-668d6bf9bc-sdf87" [30fc6237-1829-4315-b9cf-3354bd7a96a5] Running
	I0127 11:48:50.196075   70237 system_pods.go:61] "etcd-default-k8s-diff-port-407489" [d228476b-110d-4de7-9afe-08c2371bbb0e] Running
	I0127 11:48:50.196079   70237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-407489" [a059a0c6-34f1-46c3-9b67-adef842174f9] Running
	I0127 11:48:50.196083   70237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-407489" [aa65ad17-6f66-42c1-ad23-199b374d2104] Running
	I0127 11:48:50.196087   70237 system_pods.go:61] "kube-proxy-26pw8" [c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510] Running
	I0127 11:48:50.196090   70237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-407489" [190cc5cb-ab22-4143-a84a-3c4d975728c3] Running
	I0127 11:48:50.196098   70237 system_pods.go:61] "metrics-server-f79f97bbb-d7r6d" [6bd8680e-8338-48a2-b29b-a913d195bc9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:48:50.196102   70237 system_pods.go:61] "storage-provisioner" [58b014bb-8629-4398-a2ec-6ec95fa59111] Running
	I0127 11:48:50.196111   70237 system_pods.go:74] duration metric: took 177.016669ms to wait for pod list to return data ...
	I0127 11:48:50.196118   70237 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:48:50.392617   70237 default_sa.go:45] found service account: "default"
	I0127 11:48:50.392652   70237 default_sa.go:55] duration metric: took 196.52383ms for default service account to be created ...
	I0127 11:48:50.392664   70237 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:48:50.594360   70237 system_pods.go:87] 9 kube-system pods found
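
Aside: the pod_ready.go loop above amounts to waiting, per pod, for the Ready condition on the system-critical workloads. The same wait can be expressed in one shot with kubectl's built-in wait command; this is a swapped-in equivalent for illustration, not what the harness runs. The context name and the k8s-app=kube-dns label are taken from the log, the 6m timeout mirrors the "waiting up to 6m0s" entries, and shelling out via os/exec keeps the sketch self-contained.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kubectl wait blocks until the condition holds or the timeout expires.
		cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-407489",
			"-n", "kube-system", "wait", "pod",
			"-l", "k8s-app=kube-dns",
			"--for=condition=Ready", "--timeout=6m")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("wait failed: %v\n%s", err, out)
			return
		}
		fmt.Println("kube-dns pods are Ready")
	}
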
	I0127 11:50:43.920463   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:50:43.920584   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:50:43.922146   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:43.922214   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:43.922320   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:43.922480   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:43.922613   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:43.922673   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:43.924430   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:43.924530   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:43.924611   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:43.924680   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:43.924766   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:43.924851   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:43.924925   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:43.924977   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:43.925025   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:43.925150   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:43.925259   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:43.925316   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:43.925398   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:43.925467   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:43.925544   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:43.925633   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:43.925704   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:43.925839   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:43.925952   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:43.926012   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:43.926098   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:43.927567   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:43.927670   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:43.927749   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:43.927813   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:43.927885   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:43.928078   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:50:43.928123   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:50:43.928184   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928340   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928398   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928569   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928631   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928792   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928850   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929077   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929185   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929391   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929402   70686 kubeadm.go:310] 
	I0127 11:50:43.929456   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:50:43.929518   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:50:43.929531   70686 kubeadm.go:310] 
	I0127 11:50:43.929584   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:50:43.929647   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:50:43.929784   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:50:43.929800   70686 kubeadm.go:310] 
	I0127 11:50:43.929915   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:50:43.929961   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:50:43.930009   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:50:43.930019   70686 kubeadm.go:310] 
	I0127 11:50:43.930137   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:50:43.930253   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:50:43.930266   70686 kubeadm.go:310] 
	I0127 11:50:43.930419   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:50:43.930528   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:50:43.930621   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:50:43.930695   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:50:43.930745   70686 kubeadm.go:310] 
	W0127 11:50:43.930804   70686 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
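	The troubleshooting steps kubeadm prints above can be run as a single pass on the node (e.g. via 'minikube ssh'). A minimal triage sketch, assembled only from the commands quoted in this log; CONTAINERID is kubeadm's own placeholder:

	    # Is the kubelet up, and why did it last exit?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100

	    # Did cri-o manage to start any control-plane containers?
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	    # Once a failing container is identified, read its logs
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID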
	
	I0127 11:50:43.930840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:50:44.381980   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:44.397504   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:50:44.407258   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:50:44.407280   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:50:44.407331   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:50:44.416517   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:50:44.416588   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:50:44.425543   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:50:44.433996   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:50:44.434043   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:50:44.442792   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.452342   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:50:44.452410   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.462650   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:50:44.471925   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:50:44.471985   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
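	The four grep-then-rm pairs above are minikube's stale-config sweep before the retry: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8443, and here every grep exits with status 2 because 'kubeadm reset' had removed the files. A sketch of the same sweep as a loop (file list and server string taken from the log; the loop form is illustrative, not minikube's actual code):

	    SERVER=https://control-plane.minikube.internal:8443
	    for f in admin kubelet controller-manager scheduler; do
	      # grep exit status 2 means the file does not exist, matching the log above
	      sudo grep -q "$SERVER" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done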
	I0127 11:50:44.481004   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:50:44.552326   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:44.552414   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:44.696875   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:44.697032   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:44.697169   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:44.872468   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:44.875109   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:44.875201   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:44.875263   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:44.875350   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:44.875402   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:44.875466   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:44.875514   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:44.875570   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:44.875679   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:44.875792   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:44.875910   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:44.875976   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:44.876030   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:45.015504   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:45.106020   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:45.326707   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:45.574018   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:45.595960   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:45.597194   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:45.597402   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:45.740527   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:45.743100   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:45.743237   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:45.746496   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:45.747484   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:45.748125   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:45.750039   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:51:25.751949   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:51:25.752243   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:25.752539   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:30.752865   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:30.753104   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:40.753548   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:40.753726   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:00.754215   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:00.754448   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753038   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:40.753327   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753353   70686 kubeadm.go:310] 
	I0127 11:52:40.753414   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:52:40.753473   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:52:40.753483   70686 kubeadm.go:310] 
	I0127 11:52:40.753541   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:52:40.753590   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:52:40.753730   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:52:40.753743   70686 kubeadm.go:310] 
	I0127 11:52:40.753898   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:52:40.753957   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:52:40.754014   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:52:40.754030   70686 kubeadm.go:310] 
	I0127 11:52:40.754195   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:52:40.754312   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:52:40.754321   70686 kubeadm.go:310] 
	I0127 11:52:40.754453   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:52:40.754573   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:52:40.754670   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:52:40.754766   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:52:40.754777   70686 kubeadm.go:310] 
	I0127 11:52:40.755376   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:40.755478   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:52:40.755572   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:52:40.755648   70686 kubeadm.go:394] duration metric: took 7m56.47359007s to StartCluster
	I0127 11:52:40.755695   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:52:40.755757   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:52:40.792993   70686 cri.go:89] found id: ""
	I0127 11:52:40.793026   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.793045   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:52:40.793055   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:52:40.793116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:52:40.832368   70686 cri.go:89] found id: ""
	I0127 11:52:40.832397   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.832410   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:52:40.832417   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:52:40.832478   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:52:40.865175   70686 cri.go:89] found id: ""
	I0127 11:52:40.865199   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.865208   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:52:40.865215   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:52:40.865280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:52:40.896556   70686 cri.go:89] found id: ""
	I0127 11:52:40.896586   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.896594   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:52:40.896600   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:52:40.896648   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:52:40.928729   70686 cri.go:89] found id: ""
	I0127 11:52:40.928765   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.928777   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:52:40.928784   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:52:40.928852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:52:40.962998   70686 cri.go:89] found id: ""
	I0127 11:52:40.963029   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.963039   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:52:40.963053   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:52:40.963111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:52:40.994577   70686 cri.go:89] found id: ""
	I0127 11:52:40.994606   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.994616   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:52:40.994623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:52:40.994669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:52:41.030825   70686 cri.go:89] found id: ""
	I0127 11:52:41.030861   70686 logs.go:282] 0 containers: []
	W0127 11:52:41.030872   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:52:41.030884   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:52:41.030900   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:52:41.084683   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:52:41.084714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:52:41.098908   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:52:41.098946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:52:41.176430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
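	The connection refused on localhost:8443 means the kube-apiserver static pod never came up, so every kubectl call from here on fails the same way. A direct probe from inside the node (standard curl flags; the port is the one shown in the log):

	    curl -sk https://localhost:8443/healthz || echo 'apiserver not listening on 8443'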
	I0127 11:52:41.176453   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:52:41.176465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:52:41.290183   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:52:41.290219   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
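	The container-status probe above is deliberately defensive: it resolves crictl via 'which' and falls back to 'docker ps -a' if crictl is missing. The whole post-failure sweep minikube just performed can be reproduced by hand with the commands shown in this section:

	    sudo journalctl -u kubelet -n 400        # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400           # CRI-O logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a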
	W0127 11:52:41.336066   70686 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:52:41.336124   70686 out.go:270] * 
	W0127 11:52:41.336202   70686 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	
	W0127 11:52:41.336227   70686 out.go:270] * 
	W0127 11:52:41.337558   70686 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:52:41.341361   70686 out.go:201] 
	W0127 11:52:41.342596   70686 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	
	W0127 11:52:41.342686   70686 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:52:41.342709   70686 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:52:41.344162   70686 out.go:201] 
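	The suggestion above names the usual fix for this failure mode: force the v1.20 kubelet onto the systemd cgroup driver. One way to apply it to this profile (profile name taken from this run; deleting the profile first is an assumption, made to guarantee a clean kubeadm init):

	    minikube delete -p old-k8s-version-570778
	    minikube start -p old-k8s-version-570778 --extra-config=kubelet.cgroup-driver=systemd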
	
	
	==> CRI-O <==
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.887611755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979304887591653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f99d8d6-18cb-4876-a0f7-671fa15e2966 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.888087676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41390a46-9da2-46b2-845a-7226aa09294a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.888182001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41390a46-9da2-46b2-845a-7226aa09294a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.888251742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=41390a46-9da2-46b2-845a-7226aa09294a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.920793629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57d2c19d-0005-4f5a-a68e-07cad396dbf6 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.920917992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57d2c19d-0005-4f5a-a68e-07cad396dbf6 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.922472759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e60daf7-9718-418d-9e53-1fccce578fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.922998471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979304922966367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e60daf7-9718-418d-9e53-1fccce578fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.923656300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=905c3bc2-5b0e-4e8e-99e6-40137aeb9871 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.923746133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=905c3bc2-5b0e-4e8e-99e6-40137aeb9871 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.923798012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=905c3bc2-5b0e-4e8e-99e6-40137aeb9871 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.956194004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cacb60b-4491-4df0-b4a2-7b8dbedd5acd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.956291751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cacb60b-4491-4df0-b4a2-7b8dbedd5acd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.958079438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4ffe94b-c70e-44a8-b8ff-36d6f8cdc186 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.958649824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979304958622517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4ffe94b-c70e-44a8-b8ff-36d6f8cdc186 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.959247502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eedc22e9-bf2f-4011-b4a4-59e093850b63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.959330079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eedc22e9-bf2f-4011-b4a4-59e093850b63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.959377627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eedc22e9-bf2f-4011-b4a4-59e093850b63 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.990009431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4603437b-7af0-41a6-92a1-d5ac01da2ce8 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.990091371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4603437b-7af0-41a6-92a1-d5ac01da2ce8 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.990930328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67dfc25b-23e8-4fc6-a60d-56332f415a2c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.991345258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979304991319929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67dfc25b-23e8-4fc6-a60d-56332f415a2c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.991833207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8da626c-f62c-43ca-a60c-88690514779b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.991895899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8da626c-f62c-43ca-a60c-88690514779b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:01:44 old-k8s-version-570778 crio[639]: time="2025-01-27 12:01:44.991949110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c8da626c-f62c-43ca-a60c-88690514779b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 11:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049235] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981407] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.993552] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.590314] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.056000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054815] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.178788] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.126988] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.243997] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.090921] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.064410] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.869247] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +12.042296] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 11:48] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Jan27 11:50] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.066337] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:01:45 up 17 min,  0 users,  load average: 0.31, 0.10, 0.08
	Linux old-k8s-version-570778 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0007a1350, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: net.cgoIPLookup(0xc000133020, 0x48ab5d6, 0x3, 0xc0007a1350, 0x1f)
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: created by net.cgoLookupIP
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: goroutine 121 [select]:
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c76870, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c67f80, 0x0, 0x0)
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000179c00)
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 12:01:41 old-k8s-version-570778 kubelet[6518]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 12:01:41 old-k8s-version-570778 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 12:01:41 old-k8s-version-570778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 12:01:42 old-k8s-version-570778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 27 12:01:42 old-k8s-version-570778 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 12:01:42 old-k8s-version-570778 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 12:01:42 old-k8s-version-570778 kubelet[6527]: I0127 12:01:42.130023    6527 server.go:416] Version: v1.20.0
	Jan 27 12:01:42 old-k8s-version-570778 kubelet[6527]: I0127 12:01:42.130300    6527 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 12:01:42 old-k8s-version-570778 kubelet[6527]: I0127 12:01:42.132149    6527 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 12:01:42 old-k8s-version-570778 kubelet[6527]: I0127 12:01:42.133066    6527 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 27 12:01:42 old-k8s-version-570778 kubelet[6527]: W0127 12:01:42.133092    6527 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (248.24229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570778" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.29s)
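Note: the {{.APIServer}} argument to `minikube status --format` above is a Go text/template evaluated against minikube's status struct, which is why the command prints only the single word "Stopped". A minimal sketch of that mechanism, assuming hypothetical field names rather than minikube's actual status type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders status
	// templates against; the field names here are assumptions.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// "{{.APIServer}}" selects one field, so the output is just
		// that field's value, e.g. "Stopped".
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
			panic(err)
		}
	}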

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (356.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
[previous warning line repeated 43 more times, verbatim]
E0127 12:02:30.007577   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
[previous warning line repeated 4 more times, verbatim]
E0127 12:02:34.556197   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
[previous warning line repeated 111 more times, verbatim]
E0127 12:04:26.924807   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
[the same WARNING repeated a further 167 times; identical lines omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
E0127 12:07:34.555355   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.193:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.193:8443: connect: connection refused
(last message repeated 5 more times after the cert_rotation error above)
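The warning flood above is produced by a list-and-retry helper: the test repeatedly lists pods by label selector, logs each failed attempt, and keeps going until a matching pod appears or the shared context expires. A minimal sketch of that pattern, assuming a configured client-go clientset; waitForPodsByLabel and its 3-second tick are illustrative, not the actual helpers_test.go code:

    package podwait

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsByLabel polls the apiserver until a pod matching selector
    // exists. Connection errors (like the refusals logged above) are logged
    // and retried; the caller's context carries the overall deadline.
    func waitForPodsByLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        ticker := time.NewTicker(3 * time.Second) // poll interval is an assumption
        defer ticker.Stop()
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                fmt.Printf("WARNING: pod list for %q returned: %v\n", selector, err)
            } else if len(pods.Items) > 0 {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded" below
            case <-ticker.C:
            }
        }
    }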
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (225.595445ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-570778" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-570778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-570778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.127µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-570778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
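Note the 2.127µs failure above: the kubectl describe ran under the test's shared context, whose 9m0s deadline had already passed, so the call aborts before doing any work. A minimal illustration of that behavior (the kubectl invocation here is just an example command):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // The test's shared budget was already spent, so model it with a
        // context whose deadline has passed before the command even starts.
        ctx, cancel := context.WithTimeout(context.Background(), 0)
        defer cancel()

        start := time.Now()
        err := exec.CommandContext(ctx, "kubectl", "version").Run()
        // Prints something like "context deadline exceeded 2.1µs": the
        // command never runs, matching the microsecond failure in the log.
        fmt.Println(err, time.Since(start))
    }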
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (221.995538ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570778 logs -n 25: (1.035015765s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:38 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-480798                           | kubernetes-upgrade-480798    | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-091274                              | cert-expiration-091274       | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-429764 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:39 UTC |
	|         | disable-driver-mounts-429764                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:39 UTC | 27 Jan 25 11:41 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-273200             | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-986409            | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:40 UTC | 27 Jan 25 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-407489  | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:43 UTC |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273200                  | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC | 27 Jan 25 11:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273200                                   | no-preload-273200            | jenkins | v1.35.0 | 27 Jan 25 11:41 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-986409                 | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC | 27 Jan 25 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-986409                                  | embed-certs-986409           | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570778        | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-407489       | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC | 27 Jan 25 11:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-407489 | jenkins | v1.35.0 | 27 Jan 25 11:43 UTC |                     |
	|         | default-k8s-diff-port-407489                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570778             | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC | 27 Jan 25 11:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570778                              | old-k8s-version-570778       | jenkins | v1.35.0 | 27 Jan 25 11:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:44:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:44:15.929598   70686 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:44:15.929689   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929697   70686 out.go:358] Setting ErrFile to fd 2...
	I0127 11:44:15.929701   70686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:44:15.929887   70686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:44:15.930463   70686 out.go:352] Setting JSON to false
	I0127 11:44:15.931400   70686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8756,"bootTime":1737969500,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:44:15.931492   70686 start.go:139] virtualization: kvm guest
	I0127 11:44:15.933961   70686 out.go:177] * [old-k8s-version-570778] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:44:15.935491   70686 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:44:15.935496   70686 notify.go:220] Checking for updates...
	I0127 11:44:15.938050   70686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:44:15.939411   70686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:15.940688   70686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:44:15.942034   70686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:44:15.943410   70686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:44:12.181135   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.681538   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:15.945138   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:15.945529   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.945574   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.962483   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0127 11:44:15.963003   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.963519   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.963555   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.963966   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.964195   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:15.965767   70686 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 11:44:15.966927   70686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:44:15.967285   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:15.967321   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:15.981938   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0127 11:44:15.982353   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:15.982892   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:15.982918   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:15.983289   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:15.984121   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.021180   70686 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 11:44:16.022570   70686 start.go:297] selected driver: kvm2
	I0127 11:44:16.022584   70686 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:16.022687   70686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:44:16.023358   70686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.023431   70686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 11:44:16.038219   70686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 11:44:16.038645   70686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:44:16.038674   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:16.038706   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:16.038739   70686 start.go:340] cluster config:
	{Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
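The cluster config dumped above is a single logged Go struct, persisted to the profile's config.json (see the "Saving config to ..." line below). For orientation, a trimmed sketch of reading it back; only fields visible in the log are modeled, and the real minikube config type has many more:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Illustrative subset of the profile config; field names match those
    // visible in the dump above, everything else is omitted rather than guessed.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig KubernetesConfig
    }

    func main() {
        // e.g. .minikube/profiles/old-k8s-version-570778/config.json
        raw, err := os.ReadFile("config.json")
        if err != nil {
            panic(err)
        }
        var cc ClusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s, k8s %s on %s\n",
            cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.ContainerRuntime)
    }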
	I0127 11:44:16.038822   70686 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:44:16.041030   70686 out.go:177] * Starting "old-k8s-version-570778" primary control-plane node in "old-k8s-version-570778" cluster
	I0127 11:44:16.042127   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:16.042176   70686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 11:44:16.042189   70686 cache.go:56] Caching tarball of preloaded images
	I0127 11:44:16.042300   70686 preload.go:172] Found /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 11:44:16.042314   70686 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 11:44:16.042429   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:16.042632   70686 start.go:360] acquireMachinesLock for old-k8s-version-570778: {Name:mk0d593693f827996a7db5925a1ca2a419892abf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:44:16.042691   70686 start.go:364] duration metric: took 36.964µs to acquireMachinesLock for "old-k8s-version-570778"
	I0127 11:44:16.042707   70686 start.go:96] Skipping create...Using existing machine configuration
	I0127 11:44:16.042713   70686 fix.go:54] fixHost starting: 
	I0127 11:44:16.043141   70686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:44:16.043185   70686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:44:16.057334   70686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0127 11:44:16.057814   70686 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:44:16.058319   70686 main.go:141] libmachine: Using API Version  1
	I0127 11:44:16.058342   70686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:44:16.059617   70686 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:44:16.060717   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:16.060891   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetState
	I0127 11:44:16.062560   70686 fix.go:112] recreateIfNeeded on old-k8s-version-570778: state=Stopped err=<nil>
	I0127 11:44:16.062584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	W0127 11:44:16.062740   70686 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 11:44:16.064407   70686 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570778" ...
	I0127 11:44:14.581269   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.080972   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:14.765953   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:17.266323   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
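The interleaved pod_ready.go lines (PIDs 69396, 69688, 70237) track three parallel clusters, each polling one metrics-server pod's Ready condition. A minimal sketch of such a readiness check with client-go; podReady is illustrative, and the real helper also logs the status on each poll:

    package podready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod currently has condition
    // Ready=True, mirroring the Ready:"False" status printed above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }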
	I0127 11:44:16.065876   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .Start
	I0127 11:44:16.066119   70686 main.go:141] libmachine: (old-k8s-version-570778) starting domain...
	I0127 11:44:16.066142   70686 main.go:141] libmachine: (old-k8s-version-570778) ensuring networks are active...
	I0127 11:44:16.066789   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network default is active
	I0127 11:44:16.067106   70686 main.go:141] libmachine: (old-k8s-version-570778) Ensuring network mk-old-k8s-version-570778 is active
	I0127 11:44:16.067438   70686 main.go:141] libmachine: (old-k8s-version-570778) getting domain XML...
	I0127 11:44:16.068030   70686 main.go:141] libmachine: (old-k8s-version-570778) creating domain...
	I0127 11:44:17.326422   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for IP...
	I0127 11:44:17.327356   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.327887   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.327973   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.327883   70721 retry.go:31] will retry after 224.653843ms: waiting for domain to come up
	I0127 11:44:17.554516   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.555006   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.555033   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.554963   70721 retry.go:31] will retry after 278.652732ms: waiting for domain to come up
	I0127 11:44:17.835676   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:17.836235   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:17.836263   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:17.836216   70721 retry.go:31] will retry after 413.765366ms: waiting for domain to come up
	I0127 11:44:18.251786   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.252318   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.252359   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.252291   70721 retry.go:31] will retry after 384.166802ms: waiting for domain to come up
	I0127 11:44:18.637567   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:18.638099   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:18.638123   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:18.638055   70721 retry.go:31] will retry after 472.449239ms: waiting for domain to come up
	I0127 11:44:19.112411   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.112876   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.112900   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.112842   70721 retry.go:31] will retry after 883.60392ms: waiting for domain to come up
	I0127 11:44:19.997950   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:19.998399   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:19.998421   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:19.998373   70721 retry.go:31] will retry after 736.173761ms: waiting for domain to come up
	I0127 11:44:20.736442   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:20.736964   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:20.737021   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:20.736930   70721 retry.go:31] will retry after 1.379977469s: waiting for domain to come up
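The "will retry after ...ms: waiting for domain to come up" lines above are a bounded retry loop whose delays grow with jitter, as the logged values (225ms up to several seconds) show. A minimal sketch of that shape; the exact backoff policy here is illustrative, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil polls check with jittered, growing delays until it succeeds
    // or the overall timeout expires.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            if err := check(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return errors.New("timed out waiting for domain to come up")
            } else {
                sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
                fmt.Printf("retry %d: will retry after %v\n", attempt, sleep)
                time.Sleep(sleep)
                delay = delay * 3 / 2 // grow the base delay, as the logged values do
            }
        }
    }

    func main() {
        up := time.Now().Add(4 * time.Second) // pretend the domain gets an IP after 4s
        err := retryUntil(30*time.Second, func() error {
            if time.Now().Before(up) {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("result:", err)
    }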
	I0127 11:44:17.182032   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.184122   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.581213   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.079928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:19.765581   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.265882   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:22.118774   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:22.119315   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:22.119346   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:22.119278   70721 retry.go:31] will retry after 1.846963021s: waiting for domain to come up
	I0127 11:44:23.968284   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:23.968756   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:23.968788   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:23.968709   70721 retry.go:31] will retry after 1.595738144s: waiting for domain to come up
	I0127 11:44:25.565970   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:25.566464   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:25.566496   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:25.566430   70721 retry.go:31] will retry after 2.837671431s: waiting for domain to come up
	I0127 11:44:21.681373   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:23.682555   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.080232   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.080547   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:24.764338   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:26.766071   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.405715   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:28.406305   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:28.406335   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:28.406277   70721 retry.go:31] will retry after 3.421231106s: waiting for domain to come up
	I0127 11:44:26.181747   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.681419   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.681567   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:28.081045   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:30.579496   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:32.580035   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:29.264366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.264892   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:31.828582   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:31.829032   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | unable to find current IP address of domain old-k8s-version-570778 in network mk-old-k8s-version-570778
	I0127 11:44:31.829085   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | I0127 11:44:31.829004   70721 retry.go:31] will retry after 3.418527811s: waiting for domain to come up
	I0127 11:44:35.249695   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250229   70686 main.go:141] libmachine: (old-k8s-version-570778) found domain IP: 192.168.50.193
	I0127 11:44:35.250264   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has current primary IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.250273   70686 main.go:141] libmachine: (old-k8s-version-570778) reserving static IP address...
	I0127 11:44:35.250765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.250797   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | skip adding static IP to network mk-old-k8s-version-570778 - found existing host DHCP lease matching {name: "old-k8s-version-570778", mac: "52:54:00:8c:78:99", ip: "192.168.50.193"}
	I0127 11:44:35.250814   70686 main.go:141] libmachine: (old-k8s-version-570778) reserved static IP address 192.168.50.193 for domain old-k8s-version-570778
	I0127 11:44:35.250832   70686 main.go:141] libmachine: (old-k8s-version-570778) waiting for SSH...
	I0127 11:44:35.250848   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Getting to WaitForSSH function...
	I0127 11:44:35.253216   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253538   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.253571   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.253691   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH client type: external
	I0127 11:44:35.253719   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa (-rw-------)
	I0127 11:44:35.253750   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 11:44:35.253765   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | About to run SSH command:
	I0127 11:44:35.253782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | exit 0
	I0127 11:44:35.375237   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | SSH cmd err, output: <nil>: 
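
WaitForSSH above shells out to /usr/bin/ssh with the exact options logged and runs `exit 0`; exit status 0 is the signal that sshd in the guest is up and accepting the key. A self-contained Go sketch of that probe (probeSSH is an illustrative name, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs `ssh ... exit 0` against the guest, mirroring the
	// external-client invocation logged above; a nil error (exit status 0)
	// means sshd accepted the key and ran the command.
	func probeSSH(host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
		return err
	}

	func main() {
		_ = probeSSH("192.168.50.193",
			"/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa")
	}
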
	I0127 11:44:35.375580   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetConfigRaw
	I0127 11:44:35.376204   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.378824   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379163   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.379195   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.379421   70686 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/config.json ...
	I0127 11:44:35.379692   70686 machine.go:93] provisionDockerMachine start ...
	I0127 11:44:35.379720   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:35.379910   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.382057   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382361   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.382392   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.382559   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.382738   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.382901   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.383079   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.383243   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.383528   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.383542   70686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:44:35.483536   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:44:35.483585   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.483889   70686 buildroot.go:166] provisioning hostname "old-k8s-version-570778"
	I0127 11:44:35.483924   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.484119   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.487189   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487543   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.487569   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.487813   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.488019   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488147   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.488310   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.488454   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.488629   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.488641   70686 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570778 && echo "old-k8s-version-570778" | sudo tee /etc/hostname
	I0127 11:44:35.606107   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570778
	
	I0127 11:44:35.606140   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.609822   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610293   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.610329   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.610472   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.610663   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610815   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.610983   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.611167   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:35.611325   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:35.611342   70686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570778/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:44:35.720742   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:44:35.720779   70686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-18835/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-18835/.minikube}
	I0127 11:44:35.720803   70686 buildroot.go:174] setting up certificates
	I0127 11:44:35.720814   70686 provision.go:84] configureAuth start
	I0127 11:44:35.720826   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetMachineName
	I0127 11:44:35.721065   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:35.723782   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724254   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.724290   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.724483   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.726871   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.727196   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.727322   70686 provision.go:143] copyHostCerts
	I0127 11:44:35.727369   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem, removing ...
	I0127 11:44:35.727384   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem
	I0127 11:44:35.727452   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/ca.pem (1078 bytes)
	I0127 11:44:35.727537   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem, removing ...
	I0127 11:44:35.727545   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem
	I0127 11:44:35.727569   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/cert.pem (1123 bytes)
	I0127 11:44:35.727649   70686 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem, removing ...
	I0127 11:44:35.727659   70686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem
	I0127 11:44:35.727686   70686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-18835/.minikube/key.pem (1675 bytes)
	I0127 11:44:35.727741   70686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570778 san=[127.0.0.1 192.168.50.193 localhost minikube old-k8s-version-570778]
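
The server cert generated here is signed by the profile's CA and carries the logged SAN list (two IPs plus three hostnames). A Go sketch of issuing such a cert with crypto/x509; the throwaway in-memory CA stands in for ca.pem/ca-key.pem only so the sketch runs on its own:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// A throwaway CA; the real flow loads ca.pem / ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-570778"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.193")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-570778"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
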
	I0127 11:44:35.901422   70686 provision.go:177] copyRemoteCerts
	I0127 11:44:35.901473   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:44:35.901501   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:35.904015   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904354   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:35.904378   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:35.904597   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:35.904771   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:35.904967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:35.905126   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:32.681781   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:34.682249   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.078928   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.079470   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.985261   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:44:36.008090   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:44:36.031357   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:44:36.053784   70686 provision.go:87] duration metric: took 332.958985ms to configureAuth
	I0127 11:44:36.053812   70686 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:44:36.053986   70686 config.go:182] Loaded profile config "old-k8s-version-570778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:44:36.054066   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.056825   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057160   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.057186   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.057398   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.057612   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057801   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.057967   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.058191   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.058400   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.058425   70686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:44:36.280974   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:44:36.281007   70686 machine.go:96] duration metric: took 901.295604ms to provisionDockerMachine
	I0127 11:44:36.281020   70686 start.go:293] postStartSetup for "old-k8s-version-570778" (driver="kvm2")
	I0127 11:44:36.281033   70686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:44:36.281048   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.281334   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:44:36.281366   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.283980   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284452   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.284493   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.284602   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.284759   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.284915   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.285033   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.361994   70686 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:44:36.366066   70686 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:44:36.366085   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/addons for local assets ...
	I0127 11:44:36.366142   70686 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-18835/.minikube/files for local assets ...
	I0127 11:44:36.366211   70686 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem -> 260722.pem in /etc/ssl/certs
	I0127 11:44:36.366293   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:44:36.374729   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /etc/ssl/certs/260722.pem (1708 bytes)
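
filesync maps anything under .minikube/files/<path> on the host to /<path> in the guest, which is how 260722.pem lands in /etc/ssl/certs above. A small Go sketch of that mapping (localAssetTarget is an illustrative name):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// localAssetTarget mirrors the filesync mapping above: a file under the
	// files root is copied to the same relative path rooted at / on the guest.
	func localAssetTarget(filesRoot, local string) (string, error) {
		rel, err := filepath.Rel(filesRoot, local)
		if err != nil || strings.HasPrefix(rel, "..") {
			return "", fmt.Errorf("%s is not under %s", local, filesRoot)
		}
		return "/" + rel, nil
	}

	func main() {
		dst, err := localAssetTarget(
			"/home/jenkins/minikube-integration/20319-18835/.minikube/files",
			"/home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println(dst) // prints /etc/ssl/certs/260722.pem
	}
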
	I0127 11:44:36.396427   70686 start.go:296] duration metric: took 115.392742ms for postStartSetup
	I0127 11:44:36.396468   70686 fix.go:56] duration metric: took 20.353754717s for fixHost
	I0127 11:44:36.396491   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.399680   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400070   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.400097   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.400246   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.400438   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400591   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.400821   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.401019   70686 main.go:141] libmachine: Using SSH client type: native
	I0127 11:44:36.401189   70686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 11:44:36.401200   70686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:44:36.500185   70686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978276.474640374
	
	I0127 11:44:36.500211   70686 fix.go:216] guest clock: 1737978276.474640374
	I0127 11:44:36.500221   70686 fix.go:229] Guest: 2025-01-27 11:44:36.474640374 +0000 UTC Remote: 2025-01-27 11:44:36.396473102 +0000 UTC m=+20.504127240 (delta=78.167272ms)
	I0127 11:44:36.500239   70686 fix.go:200] guest clock delta is within tolerance: 78.167272ms
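
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the drift when it is under a tolerance. A Go sketch of the parse-and-compare step; the 2s tolerance is an assumed threshold, not minikube's actual value:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output (e.g. "1737978276.474640374")
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1737978276.474640374")
		if err != nil {
			panic(err)
		}
		// In the real flow the host time is captured right after the remote
		// `date` returns, so the delta is milliseconds, not years.
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		if delta <= 2*time.Second { // assumed tolerance
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
		}
	}
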
	I0127 11:44:36.500256   70686 start.go:83] releasing machines lock for "old-k8s-version-570778", held for 20.457556974s
	I0127 11:44:36.500274   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.500555   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:36.503395   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503819   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.503860   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.503969   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504404   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504584   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .DriverName
	I0127 11:44:36.504676   70686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:44:36.504723   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.504778   70686 ssh_runner.go:195] Run: cat /version.json
	I0127 11:44:36.504802   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHHostname
	I0127 11:44:36.507787   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.507815   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508140   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508175   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508207   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:36.508225   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:36.508347   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508547   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHPort
	I0127 11:44:36.508557   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508735   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.508749   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHKeyPath
	I0127 11:44:36.508887   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.509027   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetSSHUsername
	I0127 11:44:36.509185   70686 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/old-k8s-version-570778/id_rsa Username:docker}
	I0127 11:44:36.584389   70686 ssh_runner.go:195] Run: systemctl --version
	I0127 11:44:36.606466   70686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:44:36.746477   70686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:44:36.751936   70686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:44:36.751996   70686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:44:36.768698   70686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:44:36.768722   70686 start.go:495] detecting cgroup driver to use...
	I0127 11:44:36.768788   70686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:44:36.786842   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:44:36.799832   70686 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:44:36.799893   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:44:36.813751   70686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:44:36.827731   70686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:44:36.943310   70686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:44:37.088722   70686 docker.go:233] disabling docker service ...
	I0127 11:44:37.088789   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:44:37.103240   70686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:44:37.116205   70686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:44:37.254006   70686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:44:37.365382   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:44:37.379019   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:44:37.396330   70686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 11:44:37.396405   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.406845   70686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:44:37.406919   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.417968   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.428079   70686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:44:37.438133   70686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
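
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, drop any stale conmon_cgroup line, then re-add it after cgroup_manager. A Go sketch issuing the same edits in the same order:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for _, expr := range []string{
			`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`,
			`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
			`/conmon_cgroup = .*/d`,
			`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
		} {
			// Each expression matches one of the logged `sudo sed -i` runs.
			if err := exec.Command("sudo", "sed", "-i", expr, conf).Run(); err != nil {
				fmt.Println("sed:", err)
			}
		}
	}
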
	I0127 11:44:37.448951   70686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:44:37.458320   70686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:44:37.458382   70686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:44:37.476279   70686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
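
This is the netfilter fallback path: the sysctl probe exits 255 because the bridge keys don't exist until br_netfilter is loaded, so minikube loads the module and then enables IPv4 forwarding. A Go sketch of that sequence (ensureNetfilter is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureNetfilter mirrors the fallback above: if the sysctl probe fails
	// (module not loaded yet, hence "cannot stat /proc/sys/net/bridge/..."),
	// load br_netfilter, then enable IPv4 forwarding either way.
	func ensureNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Println(err)
		}
	}
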
	I0127 11:44:37.486232   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:37.609635   70686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:44:37.703117   70686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:44:37.703185   70686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:44:37.707780   70686 start.go:563] Will wait 60s for crictl version
	I0127 11:44:37.707827   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:37.711561   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:44:37.746285   70686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 11:44:37.746370   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.774346   70686 ssh_runner.go:195] Run: crio --version
	I0127 11:44:37.804220   70686 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 11:44:33.764774   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:35.764854   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.765730   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:37.805652   70686 main.go:141] libmachine: (old-k8s-version-570778) Calling .GetIP
	I0127 11:44:37.808777   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809130   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:78:99", ip: ""} in network mk-old-k8s-version-570778: {Iface:virbr3 ExpiryTime:2025-01-27 12:44:27 +0000 UTC Type:0 Mac:52:54:00:8c:78:99 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:old-k8s-version-570778 Clientid:01:52:54:00:8c:78:99}
	I0127 11:44:37.809168   70686 main.go:141] libmachine: (old-k8s-version-570778) DBG | domain old-k8s-version-570778 has defined IP address 192.168.50.193 and MAC address 52:54:00:8c:78:99 in network mk-old-k8s-version-570778
	I0127 11:44:37.809355   70686 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 11:44:37.813621   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:44:37.826271   70686 kubeadm.go:883] updating cluster {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:44:37.826370   70686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:44:37.826406   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:37.875128   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:37.875204   70686 ssh_runner.go:195] Run: which lz4
	I0127 11:44:37.879162   70686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:44:37.883378   70686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:44:37.883408   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 11:44:39.317688   70686 crio.go:462] duration metric: took 1.438551878s to copy over tarball
	I0127 11:44:39.317750   70686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:44:37.181878   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.183457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.081149   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:41.579699   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:39.767830   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.265799   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:42.264081   70686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946305063s)
	I0127 11:44:42.264109   70686 crio.go:469] duration metric: took 2.946394656s to extract the tarball
	I0127 11:44:42.264117   70686 ssh_runner.go:146] rm: /preloaded.tar.lz4
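
The preload sequence above: stat shows /preloaded.tar.lz4 absent, the ~473 MB cached tarball is copied over, extracted into /var with lz4 (keeping security.capability xattrs so binaries keep their file capabilities), then removed. A Go sketch of that flow; runRemote and copyOver are stand-ins for minikube's ssh_runner, not its API:

	package main

	import (
		"errors"
		"os/exec"
	)

	// loadPreload mirrors the logged sequence: probe for the tarball, ship
	// it only if absent, extract into /var, then clean up.
	func loadPreload(runRemote func(cmd string) error, copyOver func(src, dst string) error) error {
		const tarball = "/preloaded.tar.lz4"
		if err := runRemote(`stat -c "%s %y" ` + tarball); err != nil {
			// Not there yet; ship the cached tarball from the host.
			if err := copyOver("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4", tarball); err != nil {
				return err
			}
		}
		if err := runRemote("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
			return err
		}
		return runRemote("rm -f " + tarball)
	}

	func main() {
		sh := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
		cp := func(src, dst string) error { return errors.New("copyOver: scp not implemented in this sketch") }
		_ = loadPreload(sh, cp)
	}
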
	I0127 11:44:42.307411   70686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:44:42.344143   70686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 11:44:42.344169   70686 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:44:42.344233   70686 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.344271   70686 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.344279   70686 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.344249   70686 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.344344   70686 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.344362   70686 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 11:44:42.344415   70686 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.344314   70686 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.345773   70686 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.346448   70686 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.346465   70686 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.346515   70686 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:42.346454   70686 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.346547   70686 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.488970   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.490931   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.497125   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.504183   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.508337   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.519103   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.523858   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 11:44:42.600152   70686 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 11:44:42.600208   70686 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.600258   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629803   70686 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 11:44:42.629847   70686 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.629897   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.629956   70686 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 11:44:42.629990   70686 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.630029   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656649   70686 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 11:44:42.656693   70686 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.656693   70686 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 11:44:42.656723   70686 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.656736   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.656763   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.669267   70686 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 11:44:42.669313   70686 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.669350   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677774   70686 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 11:44:42.677823   70686 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 11:44:42.677876   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.677890   70686 ssh_runner.go:195] Run: which crictl
	I0127 11:44:42.677969   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.677987   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.678027   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.678039   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.678069   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.787131   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.787197   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.787314   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.813675   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.816360   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.816416   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.816437   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:42.930195   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:42.930298   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 11:44:42.930333   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 11:44:42.930346   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 11:44:42.971335   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 11:44:42.971389   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 11:44:42.971398   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 11:44:43.068772   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 11:44:43.068871   70686 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 11:44:43.068882   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 11:44:43.068892   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 11:44:43.097755   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 11:44:43.097781   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 11:44:43.099343   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 11:44:43.116136   70686 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 11:44:43.303986   70686 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:44:43.439716   70686 cache_images.go:92] duration metric: took 1.095530522s to LoadCachedImages
	W0127 11:44:43.439813   70686 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20319-18835/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
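
Each `podman image inspect --format {{.Id}}` above asks the runtime for an image's ID; when it differs from the hash recorded for this Kubernetes version, or the image is missing, the image "needs transfer": it is removed with crictl rmi and reloaded from the host cache (which fails here because the cache entry itself is absent). A Go sketch of the check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer compares the runtime's image ID against the expected
	// hash; a mismatch or a missing image means "load from cache".
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // image not present in the runtime at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		if needsTransfer("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c") {
			fmt.Println(`"registry.k8s.io/pause:3.2" needs transfer`)
		}
	}
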
	I0127 11:44:43.439832   70686 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 11:44:43.439974   70686 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570778 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:44:43.440069   70686 ssh_runner.go:195] Run: crio config
	I0127 11:44:43.491732   70686 cni.go:84] Creating CNI manager for ""
	I0127 11:44:43.491754   70686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:44:43.491765   70686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:44:43.491782   70686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570778 NodeName:old-k8s-version-570778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 11:44:43.491897   70686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570778"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
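	The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, assuming gopkg.in/yaml.v3 is available, that decodes such a stream and prints each document's apiVersion/kind as a quick sanity check before handing the file to kubeadm:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// "kubeadm.yaml" is a stand-in path; the log stages the real file
		// at /var/tmp/minikube/kubeadm.yaml.new on the node.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				panic(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}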
	
	I0127 11:44:43.491951   70686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 11:44:43.501539   70686 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:44:43.501593   70686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:44:43.510444   70686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 11:44:43.526994   70686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:44:43.542977   70686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 11:44:43.559986   70686 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 11:44:43.564089   70686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
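	The bash one-liner above is an idempotent /etc/hosts update: drop any stale control-plane.minikube.internal line, append the fresh IP mapping, and copy the result back over /etc/hosts. A rough Go equivalent of the same rewrite (the path in main is a stand-in for testing, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so exactly one line maps
	// ip to host, mirroring the grep -v / echo / cp idiom in the log above.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale entry for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("hosts.test", "192.168.50.193", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}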
	I0127 11:44:43.576120   70686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:44:43.702431   70686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:44:43.719740   70686 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778 for IP: 192.168.50.193
	I0127 11:44:43.719759   70686 certs.go:194] generating shared ca certs ...
	I0127 11:44:43.719773   70686 certs.go:226] acquiring lock for ca certs: {Name:mk28a488136a8ad706c6def5e8c32b522421b34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:43.719941   70686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key
	I0127 11:44:43.720011   70686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key
	I0127 11:44:43.720024   70686 certs.go:256] generating profile certs ...
	I0127 11:44:43.810274   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.key
	I0127 11:44:43.810422   70686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key.1541225f
	I0127 11:44:43.810480   70686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key
	I0127 11:44:43.810641   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem (1338 bytes)
	W0127 11:44:43.810684   70686 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072_empty.pem, impossibly tiny 0 bytes
	I0127 11:44:43.810697   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 11:44:43.810727   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/ca.pem (1078 bytes)
	I0127 11:44:43.810761   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:44:43.810789   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/certs/key.pem (1675 bytes)
	I0127 11:44:43.810838   70686 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem (1708 bytes)
	I0127 11:44:43.811665   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:44:43.856247   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:44:43.898135   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:44:43.938193   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:44:43.960927   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:44:43.984028   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:44:44.008415   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:44:44.030915   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:44:44.055340   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/certs/26072.pem --> /usr/share/ca-certificates/26072.pem (1338 bytes)
	I0127 11:44:44.077556   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/ssl/certs/260722.pem --> /usr/share/ca-certificates/260722.pem (1708 bytes)
	I0127 11:44:44.101525   70686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:44:44.124400   70686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:44:44.140292   70686 ssh_runner.go:195] Run: openssl version
	I0127 11:44:44.145827   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/260722.pem && ln -fs /usr/share/ca-certificates/260722.pem /etc/ssl/certs/260722.pem"
	I0127 11:44:44.155834   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.159949   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.160022   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/260722.pem
	I0127 11:44:44.165584   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/260722.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:44:44.178174   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:44:44.189759   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.194947   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.195006   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:44:44.200696   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:44:44.211199   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26072.pem && ln -fs /usr/share/ca-certificates/26072.pem /etc/ssl/certs/26072.pem"
	I0127 11:44:44.221194   70686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225257   70686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.225297   70686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26072.pem
	I0127 11:44:44.230582   70686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26072.pem /etc/ssl/certs/51391683.0"
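	The test-and-link commands above implement the standard OpenSSL trust-store layout: each CA certificate is reachable under its subject hash as <hash>.0 inside /etc/ssl/certs. A small Go sketch of the same idiom, shelling out to openssl for the hash (assumes openssl on PATH and write access to the target directory):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes a PEM cert's OpenSSL subject hash and
	// symlinks <hash>.0 to it inside dir, like the `ln -fs` lines above.
	func linkBySubjectHash(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(dir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}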
	I0127 11:44:44.240578   70686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:44:44.245082   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 11:44:44.252016   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 11:44:44.257760   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 11:44:44.264902   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 11:44:44.270934   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 11:44:44.276642   70686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
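	Each -checkend 86400 call above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a nonzero exit would force regeneration. A pure-Go version of the same check using crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file will
	// expire within d, the question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}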
	I0127 11:44:44.282062   70686 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:44:44.282152   70686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:44:44.282190   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.318594   70686 cri.go:89] found id: ""
	I0127 11:44:44.318650   70686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:44:44.328642   70686 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 11:44:44.328665   70686 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 11:44:44.328716   70686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 11:44:44.337760   70686 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:44:44.338436   70686 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570778" does not appear in /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:44:44.338787   70686 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-18835/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570778" cluster setting kubeconfig missing "old-k8s-version-570778" context setting]
	I0127 11:44:44.339275   70686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:44:44.379353   70686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 11:44:44.389831   70686 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0127 11:44:44.389864   70686 kubeadm.go:1160] stopping kube-system containers ...
	I0127 11:44:44.389876   70686 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 11:44:44.389917   70686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:44:44.429276   70686 cri.go:89] found id: ""
	I0127 11:44:44.429352   70686 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 11:44:44.446502   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:44:44.456332   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:44:44.456358   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:44:44.456406   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:44:44.465009   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:44:44.465064   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:44:44.474468   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:44:44.483271   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:44:44.483333   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:44:44.493091   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.501826   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:44:44.501887   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:44:44.511619   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:44:44.520146   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:44:44.520215   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:44:44.529284   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:44:44.538474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:44.669112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.430626   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.649318   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 11:44:45.747035   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
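	Rather than a full `kubeadm init`, the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config, so a failure is attributable to one phase. A hedged Go sketch that drives the same phase sequence (paths mirror the log; this is not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The phase sequence matches the five kubeadm invocations above.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				return
			}
		}
	}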
	I0127 11:44:45.834253   70686 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:44:45.834345   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
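	The long run of identical pgrep lines that follows is a poll loop: minikube re-checks for a kube-apiserver process roughly every 500ms until one appears or a deadline passes. A minimal sketch of such a wait loop (the timeout value is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until pattern matches or timeout expires,
	// echoing the repeated "Run: sudo pgrep -xnf ..." lines below.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
				return nil // pgrep exits 0 when a process matches
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", pattern)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}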
	I0127 11:44:41.682339   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.682496   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:43.911112   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.080526   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:44.265972   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.765113   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:46.334836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.834834   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.334682   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:47.834945   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.335112   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:48.834442   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.335101   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:49.835321   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.334868   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:50.835371   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:46.181944   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.681423   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:48.580901   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.079391   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:49.265367   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.765180   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:51.335142   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.835388   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.334604   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:52.835044   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.334680   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:53.834411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:54.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.335010   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:55.834554   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:51.181432   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.681540   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:53.081988   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:55.580478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:54.265141   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.265203   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.265900   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:56.335128   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.335140   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:57.835042   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.334817   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:58.834443   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.334777   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:59.835437   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.334852   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:00.834590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:44:56.182005   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.681494   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:44:58.079513   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.079905   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:02.080706   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:00.765897   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.265622   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:01.335351   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.835115   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.334828   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:02.834481   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.334592   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:03.834653   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.335201   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:04.834728   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.334872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:05.835121   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:01.181668   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:03.182704   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.681195   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:04.579620   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.079240   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:05.765054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:07.765605   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:06.335002   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:06.835393   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.334717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:07.835225   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.335465   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.835195   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.335007   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:09.835362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.334590   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:10.835441   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:08.180735   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.181326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:09.079806   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.081218   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:10.264844   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:12.765530   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:11.334541   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:11.835283   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.335343   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.834836   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.335067   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:13.834637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.334394   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:14.834608   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.334668   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:15.835178   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:12.181440   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:14.182012   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:13.579850   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.580199   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:15.265832   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:17.765291   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:16.334479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.835000   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.335139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:17.835227   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.335309   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:18.835170   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.334384   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:19.835348   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.334845   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:20.835383   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:16.681535   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.181289   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:18.080468   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:20.579930   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.580421   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:19.765695   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:22.264793   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:21.335090   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.834734   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.335362   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:22.834567   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.335485   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:23.835040   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.334533   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:24.834544   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.334975   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:25.834941   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:21.682460   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.181465   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:25.080118   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:27.579811   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:24.265167   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.265742   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:26.334897   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.834607   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.334771   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:27.834733   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.335354   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:28.834876   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.335076   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:29.835095   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.334594   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:30.834603   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:26.181841   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.680961   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:30.079284   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:32.079751   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:28.765734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.266015   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:31.335153   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.834967   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.335109   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:32.834477   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.335107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:33.835110   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.334563   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:34.835358   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.334401   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:35.835107   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:31.185937   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.680940   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:35.681777   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:34.580737   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:37.080749   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:33.765617   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.265646   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:38.266295   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:36.335163   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:36.835139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.334510   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.834447   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.334776   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:38.834844   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.334806   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:39.835253   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.334905   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:40.834948   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:37.682410   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.182049   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:39.579328   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.580544   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:40.765177   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:43.265601   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:41.334866   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:41.834518   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.335359   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:42.834415   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.335098   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:43.834540   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.335306   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:44.834575   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.335244   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:45.835032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:45.835116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:45.868609   70686 cri.go:89] found id: ""
	I0127 11:45:45.868640   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.868652   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:45.868659   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:45.868718   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:45.907767   70686 cri.go:89] found id: ""
	I0127 11:45:45.907796   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.907805   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:45.907812   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:45.907870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:42.182202   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.680856   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:44.079255   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:46.079779   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.765111   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:47.765359   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:45.940736   70686 cri.go:89] found id: ""
	I0127 11:45:45.940781   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.940791   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:45.940800   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:45.940945   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:45.972511   70686 cri.go:89] found id: ""
	I0127 11:45:45.972536   70686 logs.go:282] 0 containers: []
	W0127 11:45:45.972544   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:45.972550   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:45.972621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:46.004929   70686 cri.go:89] found id: ""
	I0127 11:45:46.004958   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.004966   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:46.004971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:46.005020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:46.037172   70686 cri.go:89] found id: ""
	I0127 11:45:46.037205   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.037217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:46.037224   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:46.037284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:46.070282   70686 cri.go:89] found id: ""
	I0127 11:45:46.070311   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.070322   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:46.070330   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:46.070387   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:46.106109   70686 cri.go:89] found id: ""
	I0127 11:45:46.106139   70686 logs.go:282] 0 containers: []
	W0127 11:45:46.106150   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:46.106163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:46.106176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:46.147686   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:46.147719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:46.199085   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:46.199119   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:46.212487   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:46.212515   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:46.331675   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:46.331698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:46.331710   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
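	Each gathering cycle above starts by probing crictl for the known control-plane container names; the repeated `found id: ""` lines mean every probe came back empty, so only kubelet, dmesg, and CRI-O journal logs are available, and `kubectl describe nodes` fails because no apiserver is listening on 8443. A small Go sketch of that container probe (assumes sudo and crictl are available on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the "crictl ps -a --quiet --name=..." probe in
	// the log: it returns container IDs (possibly none) for a name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		trimmed := strings.TrimSpace(string(out))
		if trimmed == "" {
			return nil, nil // matches the `found id: ""` lines above
		}
		return strings.Split(trimmed, "\n"), nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}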
	I0127 11:45:48.902413   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:48.915872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:48.915933   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:48.950168   70686 cri.go:89] found id: ""
	I0127 11:45:48.950215   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.950223   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:48.950229   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:48.950280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:48.981915   70686 cri.go:89] found id: ""
	I0127 11:45:48.981947   70686 logs.go:282] 0 containers: []
	W0127 11:45:48.981958   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:48.981966   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:48.982030   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:49.022418   70686 cri.go:89] found id: ""
	I0127 11:45:49.022448   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.022461   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:49.022468   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:49.022531   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:49.066138   70686 cri.go:89] found id: ""
	I0127 11:45:49.066164   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.066174   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:49.066181   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:49.066240   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:49.107856   70686 cri.go:89] found id: ""
	I0127 11:45:49.107887   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.107895   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:49.107901   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:49.107951   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:49.158460   70686 cri.go:89] found id: ""
	I0127 11:45:49.158492   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.158519   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:49.158545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:49.158608   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:49.194805   70686 cri.go:89] found id: ""
	I0127 11:45:49.194831   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.194839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:49.194844   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:49.194889   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:49.227445   70686 cri.go:89] found id: ""
	I0127 11:45:49.227475   70686 logs.go:282] 0 containers: []
	W0127 11:45:49.227483   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:49.227491   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:49.227502   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:49.280386   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:49.280418   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:49.293755   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:49.293785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:49.366338   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:49.366366   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:49.366381   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:49.444064   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:49.444102   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:47.182717   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:49.681160   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:48.080162   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.579311   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.580182   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:50.266104   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:52.266221   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:51.990077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:52.002185   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:52.002244   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:52.033585   70686 cri.go:89] found id: ""
	I0127 11:45:52.033608   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.033616   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:52.033622   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:52.033671   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:52.063740   70686 cri.go:89] found id: ""
	I0127 11:45:52.063766   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.063776   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:52.063784   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:52.063846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:52.098052   70686 cri.go:89] found id: ""
	I0127 11:45:52.098089   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.098115   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:52.098122   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:52.098186   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:52.130011   70686 cri.go:89] found id: ""
	I0127 11:45:52.130039   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.130048   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:52.130057   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:52.130101   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:52.163864   70686 cri.go:89] found id: ""
	I0127 11:45:52.163887   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.163894   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:52.163899   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:52.163946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:52.195990   70686 cri.go:89] found id: ""
	I0127 11:45:52.196020   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.196029   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:52.196034   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:52.196079   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:52.227747   70686 cri.go:89] found id: ""
	I0127 11:45:52.227780   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.227792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:52.227799   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:52.227860   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:52.262186   70686 cri.go:89] found id: ""
	I0127 11:45:52.262214   70686 logs.go:282] 0 containers: []
	W0127 11:45:52.262224   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:52.262234   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:52.262249   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:52.318567   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:52.318603   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:52.332621   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:52.332646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:52.403429   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:52.403451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:52.403462   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:52.482267   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:52.482309   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.018478   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:55.032583   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:55.032655   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:55.070418   70686 cri.go:89] found id: ""
	I0127 11:45:55.070446   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.070454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:55.070460   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:55.070534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:55.102785   70686 cri.go:89] found id: ""
	I0127 11:45:55.102820   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.102831   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:55.102837   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:55.102893   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:55.140432   70686 cri.go:89] found id: ""
	I0127 11:45:55.140466   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.140477   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:55.140483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:55.140548   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:55.173071   70686 cri.go:89] found id: ""
	I0127 11:45:55.173097   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.173107   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:55.173115   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:55.173175   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:55.207834   70686 cri.go:89] found id: ""
	I0127 11:45:55.207867   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.207878   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:55.207886   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:55.207949   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:55.240758   70686 cri.go:89] found id: ""
	I0127 11:45:55.240786   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.240794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:55.240807   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:55.240852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:55.276038   70686 cri.go:89] found id: ""
	I0127 11:45:55.276067   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.276078   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:55.276085   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:55.276135   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:55.307786   70686 cri.go:89] found id: ""
	I0127 11:45:55.307818   70686 logs.go:282] 0 containers: []
	W0127 11:45:55.307829   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:55.307841   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:55.307855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:55.384874   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:55.384908   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:55.425141   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:55.425169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:55.479108   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:55.479144   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:55.492988   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:55.493018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:55.557856   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:51.681649   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:53.681709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.580408   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:57.079629   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:54.765284   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:56.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.059727   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:45:58.072633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:45:58.072713   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:45:58.107460   70686 cri.go:89] found id: ""
	I0127 11:45:58.107494   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.107505   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:45:58.107513   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:45:58.107570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:45:58.143678   70686 cri.go:89] found id: ""
	I0127 11:45:58.143709   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.143721   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:45:58.143729   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:45:58.143794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:45:58.177914   70686 cri.go:89] found id: ""
	I0127 11:45:58.177942   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.177949   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:45:58.177957   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:45:58.178003   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:45:58.210641   70686 cri.go:89] found id: ""
	I0127 11:45:58.210679   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.210690   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:45:58.210698   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:45:58.210759   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:45:58.242373   70686 cri.go:89] found id: ""
	I0127 11:45:58.242408   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.242420   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:45:58.242427   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:45:58.242494   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:45:58.277921   70686 cri.go:89] found id: ""
	I0127 11:45:58.277954   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.277965   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:45:58.277973   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:45:58.278033   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:45:58.310342   70686 cri.go:89] found id: ""
	I0127 11:45:58.310373   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.310384   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:45:58.310391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:45:58.310459   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:45:58.345616   70686 cri.go:89] found id: ""
	I0127 11:45:58.345649   70686 logs.go:282] 0 containers: []
	W0127 11:45:58.345660   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:45:58.345671   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:45:58.345687   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:45:58.380655   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:45:58.380680   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:45:58.433828   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:45:58.433859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:45:58.447666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:45:58.447703   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:45:58.510668   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:45:58.510698   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:45:58.510714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:45:56.181754   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:58.682655   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.080820   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.580837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:45:59.266054   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.766023   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:01.087242   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:01.099871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:01.099926   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:01.132252   70686 cri.go:89] found id: ""
	I0127 11:46:01.132285   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.132293   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:01.132298   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:01.132348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:01.163920   70686 cri.go:89] found id: ""
	I0127 11:46:01.163949   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.163960   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:01.163967   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:01.164034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:01.198833   70686 cri.go:89] found id: ""
	I0127 11:46:01.198858   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.198865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:01.198871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:01.198916   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:01.238722   70686 cri.go:89] found id: ""
	I0127 11:46:01.238753   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.238763   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:01.238779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:01.238844   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:01.272868   70686 cri.go:89] found id: ""
	I0127 11:46:01.272892   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.272898   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:01.272903   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:01.272947   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:01.307986   70686 cri.go:89] found id: ""
	I0127 11:46:01.308015   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.308024   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:01.308029   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:01.308082   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:01.341997   70686 cri.go:89] found id: ""
	I0127 11:46:01.342027   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.342039   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:01.342047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:01.342109   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:01.374940   70686 cri.go:89] found id: ""
	I0127 11:46:01.374968   70686 logs.go:282] 0 containers: []
	W0127 11:46:01.374978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:01.374989   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:01.375002   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:01.428465   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:01.428500   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:01.442684   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:01.442708   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:01.512159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:01.512185   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:01.512198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:01.586215   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:01.586265   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.127745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:04.140798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:04.140873   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:04.175150   70686 cri.go:89] found id: ""
	I0127 11:46:04.175186   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.175197   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:04.175204   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:04.175282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:04.210697   70686 cri.go:89] found id: ""
	I0127 11:46:04.210727   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.210736   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:04.210744   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:04.210800   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:04.240777   70686 cri.go:89] found id: ""
	I0127 11:46:04.240803   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.240811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:04.240821   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:04.240865   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:04.273040   70686 cri.go:89] found id: ""
	I0127 11:46:04.273076   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.273087   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:04.273094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:04.273151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:04.308441   70686 cri.go:89] found id: ""
	I0127 11:46:04.308468   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.308478   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:04.308484   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:04.308546   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:04.346756   70686 cri.go:89] found id: ""
	I0127 11:46:04.346783   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.346793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:04.346802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:04.346870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:04.381718   70686 cri.go:89] found id: ""
	I0127 11:46:04.381747   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.381758   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:04.381766   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:04.381842   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:04.415875   70686 cri.go:89] found id: ""
	I0127 11:46:04.415913   70686 logs.go:282] 0 containers: []
	W0127 11:46:04.415921   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:04.415930   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:04.415942   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:04.499951   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:04.499990   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:04.539557   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:04.539592   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:04.595977   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:04.596011   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:04.609081   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:04.609107   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:04.678937   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:01.181382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.681326   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:05.682184   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:03.581478   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.079382   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:04.266171   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:06.765288   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:07.179760   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:07.193186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:07.193259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:07.226455   70686 cri.go:89] found id: ""
	I0127 11:46:07.226487   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.226498   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:07.226507   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:07.226570   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:07.259391   70686 cri.go:89] found id: ""
	I0127 11:46:07.259427   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.259439   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:07.259447   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:07.259520   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:07.295281   70686 cri.go:89] found id: ""
	I0127 11:46:07.295314   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.295326   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:07.295334   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:07.295384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:07.330145   70686 cri.go:89] found id: ""
	I0127 11:46:07.330177   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.330186   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:07.330194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:07.330260   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:07.368846   70686 cri.go:89] found id: ""
	I0127 11:46:07.368875   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.368882   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:07.368889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:07.368938   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:07.404802   70686 cri.go:89] found id: ""
	I0127 11:46:07.404832   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.404843   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:07.404851   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:07.404914   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:07.437053   70686 cri.go:89] found id: ""
	I0127 11:46:07.437081   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.437090   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:07.437096   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:07.437142   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:07.474455   70686 cri.go:89] found id: ""
	I0127 11:46:07.474482   70686 logs.go:282] 0 containers: []
	W0127 11:46:07.474490   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:07.474498   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:07.474510   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:07.529193   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:07.529229   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:07.543329   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:07.543365   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:07.623019   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:07.623043   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:07.623057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:07.701237   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:07.701277   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:10.239258   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:10.252360   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:10.252423   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:10.288112   70686 cri.go:89] found id: ""
	I0127 11:46:10.288135   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.288143   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:10.288149   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:10.288195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:10.323260   70686 cri.go:89] found id: ""
	I0127 11:46:10.323288   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.323296   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:10.323302   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:10.323358   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:10.358662   70686 cri.go:89] found id: ""
	I0127 11:46:10.358686   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.358694   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:10.358700   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:10.358744   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:10.397231   70686 cri.go:89] found id: ""
	I0127 11:46:10.397262   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.397273   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:10.397281   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:10.397384   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:10.430384   70686 cri.go:89] found id: ""
	I0127 11:46:10.430411   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.430419   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:10.430425   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:10.430490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:10.461361   70686 cri.go:89] found id: ""
	I0127 11:46:10.461387   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.461396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:10.461404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:10.461464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:10.497276   70686 cri.go:89] found id: ""
	I0127 11:46:10.497309   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.497318   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:10.497324   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:10.497389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:10.530718   70686 cri.go:89] found id: ""
	I0127 11:46:10.530751   70686 logs.go:282] 0 containers: []
	W0127 11:46:10.530762   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:10.530772   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:10.530785   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:10.578801   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:10.578839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:10.591288   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:10.591312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:10.655021   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:10.655051   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:10.655065   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:10.731115   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:10.731151   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:08.181149   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.681951   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.079678   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:10.079837   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:12.580869   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:08.766699   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:11.265066   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.265843   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:13.267173   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:13.280623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:13.280688   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:13.314325   70686 cri.go:89] found id: ""
	I0127 11:46:13.314362   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.314372   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:13.314380   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:13.314441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:13.346889   70686 cri.go:89] found id: ""
	I0127 11:46:13.346918   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.346929   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:13.346936   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:13.346989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:13.378900   70686 cri.go:89] found id: ""
	I0127 11:46:13.378929   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.378939   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:13.378945   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:13.379004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:13.412919   70686 cri.go:89] found id: ""
	I0127 11:46:13.412952   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.412963   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:13.412971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:13.413027   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:13.444222   70686 cri.go:89] found id: ""
	I0127 11:46:13.444250   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.444260   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:13.444266   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:13.444317   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:13.474180   70686 cri.go:89] found id: ""
	I0127 11:46:13.474206   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.474212   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:13.474218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:13.474277   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:13.507679   70686 cri.go:89] found id: ""
	I0127 11:46:13.507707   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.507718   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:13.507726   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:13.507785   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:13.540402   70686 cri.go:89] found id: ""
	I0127 11:46:13.540428   70686 logs.go:282] 0 containers: []
	W0127 11:46:13.540436   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:13.540444   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:13.540454   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:13.619310   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:13.619341   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:13.659541   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:13.659568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:13.710958   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:13.710992   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:13.724362   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:13.724387   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:13.799175   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:13.181930   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.681382   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.080714   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:17.580030   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:15.766366   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:18.265607   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:16.299872   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:16.313092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:16.313151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:16.344606   70686 cri.go:89] found id: ""
	I0127 11:46:16.344636   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.344647   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:16.344654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:16.344709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:16.378025   70686 cri.go:89] found id: ""
	I0127 11:46:16.378052   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.378060   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:16.378065   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:16.378112   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:16.409333   70686 cri.go:89] found id: ""
	I0127 11:46:16.409359   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.409366   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:16.409372   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:16.409417   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:16.440176   70686 cri.go:89] found id: ""
	I0127 11:46:16.440199   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.440207   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:16.440218   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:16.440303   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:16.474293   70686 cri.go:89] found id: ""
	I0127 11:46:16.474325   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.474333   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:16.474339   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:16.474386   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:16.505778   70686 cri.go:89] found id: ""
	I0127 11:46:16.505801   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.505808   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:16.505814   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:16.505867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:16.540769   70686 cri.go:89] found id: ""
	I0127 11:46:16.540797   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.540807   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:16.540815   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:16.540870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:16.576592   70686 cri.go:89] found id: ""
	I0127 11:46:16.576620   70686 logs.go:282] 0 containers: []
	W0127 11:46:16.576630   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:16.576640   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:16.576652   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:16.653408   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:16.653443   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:16.692433   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:16.692458   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:16.740803   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:16.740837   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:16.753287   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:16.753312   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:16.826095   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:19.327736   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:19.340166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:19.340220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:19.371540   70686 cri.go:89] found id: ""
	I0127 11:46:19.371578   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.371591   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:19.371600   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:19.371673   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:19.404729   70686 cri.go:89] found id: ""
	I0127 11:46:19.404764   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.404774   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:19.404781   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:19.404837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:19.439789   70686 cri.go:89] found id: ""
	I0127 11:46:19.439825   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.439837   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:19.439846   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:19.439906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:19.470570   70686 cri.go:89] found id: ""
	I0127 11:46:19.470600   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.470611   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:19.470619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:19.470681   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:19.501777   70686 cri.go:89] found id: ""
	I0127 11:46:19.501805   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.501816   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:19.501824   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:19.501880   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:19.534181   70686 cri.go:89] found id: ""
	I0127 11:46:19.534210   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.534217   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:19.534223   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:19.534284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:19.566593   70686 cri.go:89] found id: ""
	I0127 11:46:19.566620   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.566628   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:19.566633   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:19.566693   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:19.599915   70686 cri.go:89] found id: ""
	I0127 11:46:19.599940   70686 logs.go:282] 0 containers: []
	W0127 11:46:19.599951   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:19.599966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:19.599981   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:19.650351   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:19.650385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:19.663542   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:19.663567   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:19.734523   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:19.734552   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:19.734568   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:19.808148   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:19.808182   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:18.181077   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.181255   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:19.580896   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.079867   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:20.765484   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:22.345687   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:22.359497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:22.359568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:22.392346   70686 cri.go:89] found id: ""
	I0127 11:46:22.392372   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.392381   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:22.392386   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:22.392443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:22.425056   70686 cri.go:89] found id: ""
	I0127 11:46:22.425081   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.425089   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:22.425093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:22.425146   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:22.460472   70686 cri.go:89] found id: ""
	I0127 11:46:22.460501   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.460512   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:22.460519   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:22.460580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:22.494621   70686 cri.go:89] found id: ""
	I0127 11:46:22.494646   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.494656   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:22.494663   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:22.494724   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:22.531878   70686 cri.go:89] found id: ""
	I0127 11:46:22.531902   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.531909   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:22.531914   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:22.531961   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:22.566924   70686 cri.go:89] found id: ""
	I0127 11:46:22.566946   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.566953   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:22.566960   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:22.567019   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:22.601357   70686 cri.go:89] found id: ""
	I0127 11:46:22.601384   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.601394   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:22.601402   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:22.601467   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:22.634574   70686 cri.go:89] found id: ""
	I0127 11:46:22.634611   70686 logs.go:282] 0 containers: []
	W0127 11:46:22.634620   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:22.634631   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:22.634641   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:22.683998   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:22.684027   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:22.697042   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:22.697068   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:22.758991   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:22.759018   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:22.759034   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:22.837791   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:22.837824   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
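	Before each gather pass, the collector also enumerates control-plane containers by name; the repeating `found id: ""` / `0 containers` / `No container was found` triples mean the CRI has no trace of any of them. A sketch of that enumeration step, under the same hypothetical runOverSSH assumption as the previous example:

    package main

    import (
        "fmt"
        "strings"
    )

    // listContainers mirrors the cri.go listing step: for each component, ask
    // crictl for matching container IDs (one ID per output line). An empty
    // result is what produces the `0 containers` warnings above.
    func listContainers(runOverSSH func(string) (string, error)) map[string][]string {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        found := make(map[string][]string)
        for _, c := range components {
            out, err := runOverSSH("sudo crictl ps -a --quiet --name=" + c)
            if err != nil {
                continue // tolerate transient SSH failures
            }
            ids := strings.Fields(out)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
            found[c] = ids
        }
        return found
    }

    func main() {
        stub := func(string) (string, error) { return "", nil }
        listContainers(stub)
    }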
	I0127 11:46:25.374998   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:25.387470   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:25.387527   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:25.419525   70686 cri.go:89] found id: ""
	I0127 11:46:25.419552   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.419559   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:25.419565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:25.419637   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:25.452027   70686 cri.go:89] found id: ""
	I0127 11:46:25.452051   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.452059   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:25.452064   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:25.452111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:25.482868   70686 cri.go:89] found id: ""
	I0127 11:46:25.482899   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.482909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:25.482916   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:25.482978   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:25.513413   70686 cri.go:89] found id: ""
	I0127 11:46:25.513438   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.513447   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:25.513453   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:25.513497   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:25.544499   70686 cri.go:89] found id: ""
	I0127 11:46:25.544525   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.544534   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:25.544545   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:25.544591   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:25.576649   70686 cri.go:89] found id: ""
	I0127 11:46:25.576676   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.576686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:25.576694   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:25.576749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:25.613447   70686 cri.go:89] found id: ""
	I0127 11:46:25.613476   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.613483   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:25.613489   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:25.613547   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:25.645468   70686 cri.go:89] found id: ""
	I0127 11:46:25.645492   70686 logs.go:282] 0 containers: []
	W0127 11:46:25.645503   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:25.645513   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:25.645530   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:25.724060   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:25.724112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:25.758966   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:25.759001   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:25.809187   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:25.809218   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:25.822532   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:25.822563   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:25.889713   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:22.682762   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.180989   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:24.580025   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.079771   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:25.265011   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:27.265712   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
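	The interleaved lines from processes 70237, 69688, and 69396 are three other profiles running the same check in parallel: each polls a metrics-server pod in kube-system for the Ready condition every couple of seconds against a 4m0s budget (it is this budget that expires at 11:46:46 further down). A client-go sketch of such a poll; the pod name, interval, and helper shape here are illustrative, not minikube's exact implementation:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the PodReady condition until it is True or the
    // timeout elapses, mirroring the pod_ready lines in this log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Placeholder pod name standing in for the metrics-server pods polled above.
        if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-xxxxx"); err != nil {
            fmt.Fprintln(os.Stderr, "not ready:", err)
        }
    }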
	I0127 11:46:28.390290   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:28.402720   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:28.402794   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:28.433933   70686 cri.go:89] found id: ""
	I0127 11:46:28.433960   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.433971   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:28.433979   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:28.434037   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:28.465830   70686 cri.go:89] found id: ""
	I0127 11:46:28.465864   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.465874   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:28.465881   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:28.465939   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:28.497527   70686 cri.go:89] found id: ""
	I0127 11:46:28.497562   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.497570   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:28.497579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:28.497645   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:28.531270   70686 cri.go:89] found id: ""
	I0127 11:46:28.531299   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.531308   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:28.531316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:28.531371   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:28.563348   70686 cri.go:89] found id: ""
	I0127 11:46:28.563369   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.563376   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:28.563381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:28.563426   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:28.596997   70686 cri.go:89] found id: ""
	I0127 11:46:28.597020   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.597027   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:28.597032   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:28.597078   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:28.631710   70686 cri.go:89] found id: ""
	I0127 11:46:28.631744   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.631756   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:28.631763   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:28.631822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:28.691511   70686 cri.go:89] found id: ""
	I0127 11:46:28.691543   70686 logs.go:282] 0 containers: []
	W0127 11:46:28.691554   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:28.691565   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:28.691579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:28.742602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:28.742635   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:28.756184   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:28.756207   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:28.830835   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:28.830857   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:28.830868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:28.905594   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:28.905630   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:27.181377   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.682869   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.580416   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:32.080512   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:29.765386   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:31.766041   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:31.441466   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:31.453810   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:31.453884   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:31.486385   70686 cri.go:89] found id: ""
	I0127 11:46:31.486419   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.486428   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:31.486433   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:31.486486   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:31.518387   70686 cri.go:89] found id: ""
	I0127 11:46:31.518414   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.518422   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:31.518427   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:31.518487   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:31.553495   70686 cri.go:89] found id: ""
	I0127 11:46:31.553519   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.553527   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:31.553532   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:31.553585   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:31.587152   70686 cri.go:89] found id: ""
	I0127 11:46:31.587178   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.587187   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:31.587194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:31.587249   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:31.617431   70686 cri.go:89] found id: ""
	I0127 11:46:31.617459   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.617468   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:31.617474   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:31.617519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:31.651686   70686 cri.go:89] found id: ""
	I0127 11:46:31.651712   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.651720   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:31.651725   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:31.651771   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:31.684941   70686 cri.go:89] found id: ""
	I0127 11:46:31.684967   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.684977   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:31.684984   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:31.685042   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:31.718413   70686 cri.go:89] found id: ""
	I0127 11:46:31.718440   70686 logs.go:282] 0 containers: []
	W0127 11:46:31.718451   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:31.718461   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:31.718476   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:31.767445   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:31.767470   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:31.780922   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:31.780949   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:31.846438   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:31.846462   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:31.846474   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:31.926888   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:31.926923   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.465125   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:34.479852   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:34.479930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:34.511060   70686 cri.go:89] found id: ""
	I0127 11:46:34.511084   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.511093   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:34.511098   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:34.511143   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:34.544234   70686 cri.go:89] found id: ""
	I0127 11:46:34.544263   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.544269   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:34.544275   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:34.544319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:34.578776   70686 cri.go:89] found id: ""
	I0127 11:46:34.578799   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.578809   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:34.578816   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:34.578871   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:34.611130   70686 cri.go:89] found id: ""
	I0127 11:46:34.611154   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.611163   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:34.611168   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:34.611225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:34.643126   70686 cri.go:89] found id: ""
	I0127 11:46:34.643153   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.643163   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:34.643171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:34.643227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:34.678033   70686 cri.go:89] found id: ""
	I0127 11:46:34.678076   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.678087   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:34.678094   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:34.678160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:34.712414   70686 cri.go:89] found id: ""
	I0127 11:46:34.712443   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.712454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:34.712461   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:34.712534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:34.745083   70686 cri.go:89] found id: ""
	I0127 11:46:34.745109   70686 logs.go:282] 0 containers: []
	W0127 11:46:34.745116   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:34.745124   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:34.745136   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:34.757666   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:34.757694   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:34.823196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:34.823218   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:34.823230   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:34.905878   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:34.905913   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:34.942463   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:34.942488   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:32.181312   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.181612   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:34.579348   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.579626   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:33.766304   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:36.265533   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:37.493333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:37.505875   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:37.505935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:37.538445   70686 cri.go:89] found id: ""
	I0127 11:46:37.538470   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.538478   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:37.538484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:37.538537   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:37.569576   70686 cri.go:89] found id: ""
	I0127 11:46:37.569607   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.569618   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:37.569625   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:37.569687   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:37.603340   70686 cri.go:89] found id: ""
	I0127 11:46:37.603366   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.603376   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:37.603383   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:37.603441   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:37.637178   70686 cri.go:89] found id: ""
	I0127 11:46:37.637211   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.637221   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:37.637230   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:37.637294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:37.669332   70686 cri.go:89] found id: ""
	I0127 11:46:37.669359   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.669367   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:37.669373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:37.669420   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:37.701983   70686 cri.go:89] found id: ""
	I0127 11:46:37.702012   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.702021   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:37.702028   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:37.702089   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:37.734833   70686 cri.go:89] found id: ""
	I0127 11:46:37.734856   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.734865   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:37.734871   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:37.734927   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:37.768113   70686 cri.go:89] found id: ""
	I0127 11:46:37.768141   70686 logs.go:282] 0 containers: []
	W0127 11:46:37.768149   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:37.768157   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:37.768167   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:37.839883   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:37.839917   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:37.876177   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:37.876210   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:37.928640   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:37.928669   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:37.942971   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:37.942995   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:38.012611   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.514324   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:40.526994   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:40.527053   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:40.561170   70686 cri.go:89] found id: ""
	I0127 11:46:40.561192   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.561200   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:40.561205   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:40.561248   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:40.597933   70686 cri.go:89] found id: ""
	I0127 11:46:40.597964   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.597973   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:40.597981   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:40.598049   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:40.633227   70686 cri.go:89] found id: ""
	I0127 11:46:40.633255   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.633263   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:40.633287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:40.633348   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:40.667332   70686 cri.go:89] found id: ""
	I0127 11:46:40.667360   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.667368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:40.667373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:40.667434   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:40.702346   70686 cri.go:89] found id: ""
	I0127 11:46:40.702372   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.702383   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:40.702391   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:40.702447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:40.733890   70686 cri.go:89] found id: ""
	I0127 11:46:40.733916   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.733924   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:40.733929   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:40.733979   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:40.766986   70686 cri.go:89] found id: ""
	I0127 11:46:40.767005   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.767011   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:40.767016   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:40.767069   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:40.809290   70686 cri.go:89] found id: ""
	I0127 11:46:40.809320   70686 logs.go:282] 0 containers: []
	W0127 11:46:40.809331   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:40.809342   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:40.809363   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:40.863970   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:40.864006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:40.886163   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:40.886188   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:46:36.181772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.181835   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.682630   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:39.080089   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:41.080522   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:38.766056   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:40.766734   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.264746   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	W0127 11:46:40.951248   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:40.951277   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:40.951293   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:41.025220   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:41.025251   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.562970   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:43.575475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:43.575540   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:43.614847   70686 cri.go:89] found id: ""
	I0127 11:46:43.614875   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.614885   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:43.614892   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:43.614957   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:43.651178   70686 cri.go:89] found id: ""
	I0127 11:46:43.651208   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.651219   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:43.651227   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:43.651282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:43.683752   70686 cri.go:89] found id: ""
	I0127 11:46:43.683777   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.683783   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:43.683788   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:43.683846   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:43.718384   70686 cri.go:89] found id: ""
	I0127 11:46:43.718418   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.718429   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:43.718486   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:43.718557   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:43.751566   70686 cri.go:89] found id: ""
	I0127 11:46:43.751619   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.751631   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:43.751639   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:43.751701   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:43.785338   70686 cri.go:89] found id: ""
	I0127 11:46:43.785370   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.785381   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:43.785390   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:43.785453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:43.825291   70686 cri.go:89] found id: ""
	I0127 11:46:43.825320   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.825330   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:43.825337   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:43.825397   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:43.856396   70686 cri.go:89] found id: ""
	I0127 11:46:43.856422   70686 logs.go:282] 0 containers: []
	W0127 11:46:43.856429   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:43.856437   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:43.856448   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:43.907954   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:43.907991   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:43.920963   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:43.920987   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:43.986527   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:43.986547   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:43.986562   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:44.062764   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:44.062796   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:43.181118   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.185722   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:43.080947   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.579654   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:45.265779   69396 pod_ready.go:103] pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:46.259360   69396 pod_ready.go:82] duration metric: took 4m0.000152356s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:46.259407   69396 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-75rzv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:46.259422   69396 pod_ready.go:39] duration metric: took 4m14.538674469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:46.259449   69396 kubeadm.go:597] duration metric: took 4m21.955300548s to restartPrimaryControlPlane
	W0127 11:46:46.259525   69396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:46.259559   69396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
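	This is the turning point for profile 69396: the 4m0s Ready budget expired, restartPrimaryControlPlane is abandoned, and minikube falls back to wiping the node with kubeadm reset before re-initializing the cluster. The command is exactly the one on the line above; a sketch of that fallback step, reusing the hypothetical runOverSSH helper from the earlier examples:

    package main

    import (
        "fmt"
        "strconv"
    )

    // resetCluster performs the destructive fallback seen at 11:46:46: once
    // the control plane cannot be restarted in place, the node is reset with
    // kubeadm so the cluster can be created from scratch.
    func resetCluster(runOverSSH func(string) (string, error)) error {
        cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" ` +
            `kubeadm reset --cri-socket /var/run/crio/crio.sock --force`
        _, err := runOverSSH("/bin/bash -c " + strconv.Quote(cmd))
        return err
    }

    func main() {
        stub := func(cmd string) (string, error) { fmt.Println("(stub)", cmd); return "", nil }
        if err := resetCluster(stub); err != nil {
            fmt.Println("reset failed:", err)
        }
    }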
	I0127 11:46:46.599548   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:46.625909   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:46.625985   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:46.670285   70686 cri.go:89] found id: ""
	I0127 11:46:46.670317   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.670329   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:46.670337   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:46.670408   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:46.703591   70686 cri.go:89] found id: ""
	I0127 11:46:46.703628   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.703636   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:46.703642   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:46.703689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:46.734451   70686 cri.go:89] found id: ""
	I0127 11:46:46.734475   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.734482   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:46.734487   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:46.734539   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:46.768854   70686 cri.go:89] found id: ""
	I0127 11:46:46.768879   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.768886   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:46.768891   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:46.768937   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:46.798912   70686 cri.go:89] found id: ""
	I0127 11:46:46.798937   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.798945   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:46.798951   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:46.799009   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:46.832665   70686 cri.go:89] found id: ""
	I0127 11:46:46.832689   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.832696   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:46.832702   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:46.832751   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:46.863964   70686 cri.go:89] found id: ""
	I0127 11:46:46.863990   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.863998   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:46.864003   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:46.864064   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:46.902558   70686 cri.go:89] found id: ""
	I0127 11:46:46.902595   70686 logs.go:282] 0 containers: []
	W0127 11:46:46.902606   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:46.902617   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:46.902632   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:46.937731   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:46.937754   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:46.986804   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:46.986839   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:47.000095   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:47.000142   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:47.064072   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:47.064099   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:47.064118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:49.640691   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:49.653166   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:49.653225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:49.687904   70686 cri.go:89] found id: ""
	I0127 11:46:49.687928   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.687938   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:49.687945   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:49.688000   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:49.725500   70686 cri.go:89] found id: ""
	I0127 11:46:49.725528   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.725537   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:49.725549   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:49.725610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:49.757793   70686 cri.go:89] found id: ""
	I0127 11:46:49.757823   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.757834   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:49.757841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:49.757901   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:49.789916   70686 cri.go:89] found id: ""
	I0127 11:46:49.789945   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.789955   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:49.789962   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:49.790020   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:49.821431   70686 cri.go:89] found id: ""
	I0127 11:46:49.821461   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.821472   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:49.821479   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:49.821541   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:49.853511   70686 cri.go:89] found id: ""
	I0127 11:46:49.853541   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.853548   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:49.853554   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:49.853605   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:49.887197   70686 cri.go:89] found id: ""
	I0127 11:46:49.887225   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.887232   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:49.887237   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:49.887313   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:49.920423   70686 cri.go:89] found id: ""
	I0127 11:46:49.920454   70686 logs.go:282] 0 containers: []
	W0127 11:46:49.920465   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:49.920476   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:49.920489   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:49.970455   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:49.970487   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:49.985812   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:49.985844   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:50.055494   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:50.055520   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:50.055536   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:50.134706   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:50.134743   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:47.682388   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.180618   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:48.080040   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:50.580505   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.580590   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:52.675280   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:52.690464   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:52.690545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:52.722566   70686 cri.go:89] found id: ""
	I0127 11:46:52.722600   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.722611   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:52.722621   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:52.722683   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:52.754684   70686 cri.go:89] found id: ""
	I0127 11:46:52.754710   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.754718   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:52.754723   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:52.754782   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:52.786631   70686 cri.go:89] found id: ""
	I0127 11:46:52.786659   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.786685   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:52.786691   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:52.786745   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:52.817637   70686 cri.go:89] found id: ""
	I0127 11:46:52.817664   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.817672   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:52.817681   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:52.817737   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:52.853402   70686 cri.go:89] found id: ""
	I0127 11:46:52.853428   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.853437   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:52.853443   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:52.853504   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:52.893692   70686 cri.go:89] found id: ""
	I0127 11:46:52.893720   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.893727   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:52.893733   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:52.893780   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.924897   70686 cri.go:89] found id: ""
	I0127 11:46:52.924926   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.924934   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:52.924940   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:52.924988   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:52.955377   70686 cri.go:89] found id: ""
	I0127 11:46:52.955397   70686 logs.go:282] 0 containers: []
	W0127 11:46:52.955404   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:52.955412   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:52.955422   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:53.007489   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:53.007518   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:53.020482   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:53.020508   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:53.088456   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:53.088489   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:53.088503   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:53.161401   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:53.161432   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:55.698676   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:55.711047   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:55.711104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:55.741929   70686 cri.go:89] found id: ""
	I0127 11:46:55.741952   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.741960   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:55.741965   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:55.742016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:55.773353   70686 cri.go:89] found id: ""
	I0127 11:46:55.773385   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.773394   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:55.773399   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:55.773453   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:55.805262   70686 cri.go:89] found id: ""
	I0127 11:46:55.805293   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.805303   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:55.805309   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:55.805356   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:55.837444   70686 cri.go:89] found id: ""
	I0127 11:46:55.837469   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.837477   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:55.837483   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:55.837554   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:55.870483   70686 cri.go:89] found id: ""
	I0127 11:46:55.870519   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.870533   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:55.870541   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:55.870603   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:55.902327   70686 cri.go:89] found id: ""
	I0127 11:46:55.902364   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.902374   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:55.902381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:55.902448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:52.182237   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:54.680772   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:55.079977   69688 pod_ready.go:103] pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:56.573914   69688 pod_ready.go:82] duration metric: took 4m0.000313005s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" ...
	E0127 11:46:56.573939   69688 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8rmt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:46:56.573958   69688 pod_ready.go:39] duration metric: took 4m9.537234596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:46:56.573984   69688 kubeadm.go:597] duration metric: took 4m17.786447343s to restartPrimaryControlPlane
	W0127 11:46:56.574055   69688 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:46:56.574078   69688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
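
For context on the 4m0s expiry above: pod_ready polls each system-critical pod's Ready condition until its budget runs out, after which minikube gives up on restarting the control plane and falls back to `kubeadm reset`. A hedged client-go sketch of that wait (an illustration, not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 4m0s -- the budget that expired above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-f79f97bbb-8rmt5", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		return podReady(pod), nil
	})
	if err != nil {
		fmt.Println("timed out waiting 4m0s for pod to be \"Ready\"")
	}
}
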
	I0127 11:46:55.936231   70686 cri.go:89] found id: ""
	I0127 11:46:55.936269   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.936279   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:55.936287   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:55.936369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:55.968008   70686 cri.go:89] found id: ""
	I0127 11:46:55.968032   70686 logs.go:282] 0 containers: []
	W0127 11:46:55.968039   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:55.968047   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:55.968057   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:56.018736   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:56.018766   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:56.031397   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:56.031423   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:56.097044   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:56.097066   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:56.097079   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:56.171821   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:56.171855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:58.715327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:46:58.728027   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:46:58.728087   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:46:58.758672   70686 cri.go:89] found id: ""
	I0127 11:46:58.758700   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.758712   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:46:58.758719   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:46:58.758786   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:46:58.790220   70686 cri.go:89] found id: ""
	I0127 11:46:58.790245   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.790255   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:46:58.790263   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:46:58.790327   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:46:58.822188   70686 cri.go:89] found id: ""
	I0127 11:46:58.822214   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.822221   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:46:58.822227   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:46:58.822273   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:46:58.863053   70686 cri.go:89] found id: ""
	I0127 11:46:58.863089   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.863096   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:46:58.863102   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:46:58.863156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:46:58.899216   70686 cri.go:89] found id: ""
	I0127 11:46:58.899259   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.899271   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:46:58.899279   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:46:58.899338   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:46:58.935392   70686 cri.go:89] found id: ""
	I0127 11:46:58.935425   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.935435   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:46:58.935441   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:46:58.935503   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:46:58.972729   70686 cri.go:89] found id: ""
	I0127 11:46:58.972759   70686 logs.go:282] 0 containers: []
	W0127 11:46:58.972767   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:46:58.972772   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:46:58.972823   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:46:59.008660   70686 cri.go:89] found id: ""
	I0127 11:46:59.008689   70686 logs.go:282] 0 containers: []
	W0127 11:46:59.008698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:46:59.008707   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:46:59.008718   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:46:59.063158   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:46:59.063199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:46:59.075767   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:46:59.075799   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:46:59.142382   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:46:59.142406   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:46:59.142421   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:46:59.223068   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:46:59.223100   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:46:56.683260   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:46:59.183917   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:01.760319   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:01.774202   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:01.774282   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:01.817355   70686 cri.go:89] found id: ""
	I0127 11:47:01.817389   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.817401   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:01.817408   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:01.817469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:01.862960   70686 cri.go:89] found id: ""
	I0127 11:47:01.862985   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.862996   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:01.863003   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:01.863065   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:01.899900   70686 cri.go:89] found id: ""
	I0127 11:47:01.899931   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.899942   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:01.899949   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:01.900014   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:01.934687   70686 cri.go:89] found id: ""
	I0127 11:47:01.934723   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.934735   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:01.934744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:01.934809   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:01.969463   70686 cri.go:89] found id: ""
	I0127 11:47:01.969490   70686 logs.go:282] 0 containers: []
	W0127 11:47:01.969501   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:01.969507   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:01.969578   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:02.000732   70686 cri.go:89] found id: ""
	I0127 11:47:02.000762   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.000772   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:02.000779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:02.000837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:02.035717   70686 cri.go:89] found id: ""
	I0127 11:47:02.035740   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.035748   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:02.035755   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:02.035799   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:02.073457   70686 cri.go:89] found id: ""
	I0127 11:47:02.073488   70686 logs.go:282] 0 containers: []
	W0127 11:47:02.073498   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:02.073506   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:02.073519   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:02.142775   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:02.142800   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:02.142819   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:02.224541   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:02.224579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:02.260807   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:02.260840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:02.315983   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:02.316017   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:04.830232   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:04.844321   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:04.844380   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:04.880946   70686 cri.go:89] found id: ""
	I0127 11:47:04.880977   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.880986   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:04.880991   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:04.881066   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:04.913741   70686 cri.go:89] found id: ""
	I0127 11:47:04.913766   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.913773   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:04.913778   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:04.913831   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:04.948526   70686 cri.go:89] found id: ""
	I0127 11:47:04.948558   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.948565   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:04.948571   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:04.948621   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:04.982076   70686 cri.go:89] found id: ""
	I0127 11:47:04.982102   70686 logs.go:282] 0 containers: []
	W0127 11:47:04.982112   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:04.982119   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:04.982181   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:05.014982   70686 cri.go:89] found id: ""
	I0127 11:47:05.015007   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.015018   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:05.015025   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:05.015111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:05.048025   70686 cri.go:89] found id: ""
	I0127 11:47:05.048054   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.048065   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:05.048073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:05.048132   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:05.078464   70686 cri.go:89] found id: ""
	I0127 11:47:05.078492   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.078502   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:05.078509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:05.078584   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:05.109525   70686 cri.go:89] found id: ""
	I0127 11:47:05.109560   70686 logs.go:282] 0 containers: []
	W0127 11:47:05.109571   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:05.109581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:05.109595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:05.157576   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:05.157608   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:05.170049   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:05.170087   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:05.239411   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:05.239433   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:05.239447   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:05.318700   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:05.318742   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:01.682086   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:04.182095   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:07.856193   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:07.870239   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:07.870310   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:07.910104   70686 cri.go:89] found id: ""
	I0127 11:47:07.910130   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.910138   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:07.910144   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:07.910189   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:07.945048   70686 cri.go:89] found id: ""
	I0127 11:47:07.945074   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.945084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:07.945092   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:07.945166   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:07.976080   70686 cri.go:89] found id: ""
	I0127 11:47:07.976111   70686 logs.go:282] 0 containers: []
	W0127 11:47:07.976122   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:07.976128   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:07.976200   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:08.013354   70686 cri.go:89] found id: ""
	I0127 11:47:08.013388   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.013400   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:08.013407   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:08.013465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:08.045589   70686 cri.go:89] found id: ""
	I0127 11:47:08.045618   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.045626   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:08.045631   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:08.045689   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:08.079539   70686 cri.go:89] found id: ""
	I0127 11:47:08.079565   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.079573   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:08.079579   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:08.079650   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:08.110343   70686 cri.go:89] found id: ""
	I0127 11:47:08.110375   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.110383   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:08.110388   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:08.110447   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:08.140367   70686 cri.go:89] found id: ""
	I0127 11:47:08.140398   70686 logs.go:282] 0 containers: []
	W0127 11:47:08.140411   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:08.140422   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:08.140436   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:08.205212   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:08.205240   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:08.205255   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:08.277925   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:08.277956   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:08.314583   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:08.314609   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:08.362779   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:08.362809   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:10.876637   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:10.890367   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:10.890448   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:10.925658   70686 cri.go:89] found id: ""
	I0127 11:47:10.925688   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.925699   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:10.925707   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:10.925763   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:06.681477   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:08.681667   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.916547   69396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.656958711s)
	I0127 11:47:13.916611   69396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:13.933947   69396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:13.945813   69396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:13.956760   69396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:13.956784   69396 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:13.956829   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:13.967874   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:13.967928   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:13.978307   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:13.988624   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:13.988681   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:14.000424   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.012062   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:14.012123   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:14.021263   69396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:14.031880   69396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:14.031940   69396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
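
The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that is missing or does not contain that endpoint is removed so the subsequent `kubeadm init` can regenerate it. Here every grep exited with status 2 because the preceding `kubeadm reset` had already deleted the files. A rough standalone equivalent (an illustrative sketch, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or one without the expected endpoint is stale.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // mirrors the logged: sudo rm -f <file>
		}
	}
}
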
	I0127 11:47:14.043324   69396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:14.085914   69396 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:14.085997   69396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:14.183080   69396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:14.183249   69396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:14.183394   69396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:14.195440   69396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:14.197259   69396 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:14.197356   69396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:14.197854   69396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:14.198266   69396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:14.198428   69396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:14.198787   69396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:14.200947   69396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:14.201202   69396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:14.201438   69396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:14.201742   69396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:14.201820   69396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:14.201962   69396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:14.202056   69396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:14.393335   69396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:14.578877   69396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:14.683103   69396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:14.892112   69396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:15.059210   69396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:15.059802   69396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:15.062493   69396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:10.957444   70686 cri.go:89] found id: ""
	I0127 11:47:10.957478   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.957490   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:10.957498   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:10.957561   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:10.988373   70686 cri.go:89] found id: ""
	I0127 11:47:10.988401   70686 logs.go:282] 0 containers: []
	W0127 11:47:10.988412   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:10.988419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:10.988483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:11.019641   70686 cri.go:89] found id: ""
	I0127 11:47:11.019672   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.019683   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:11.019690   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:11.019747   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:11.051614   70686 cri.go:89] found id: ""
	I0127 11:47:11.051643   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.051654   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:11.051661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:11.051709   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:11.083356   70686 cri.go:89] found id: ""
	I0127 11:47:11.083386   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.083396   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:11.083404   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:11.083464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:11.115324   70686 cri.go:89] found id: ""
	I0127 11:47:11.115359   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.115370   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:11.115378   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:11.115451   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:11.150953   70686 cri.go:89] found id: ""
	I0127 11:47:11.150983   70686 logs.go:282] 0 containers: []
	W0127 11:47:11.150994   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:11.151005   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:11.151018   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:11.199824   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:11.199855   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:11.212841   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:11.212906   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:11.278680   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:11.278707   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:11.278726   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:11.356679   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:11.356719   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:13.900662   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:13.913787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:13.913849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:13.947893   70686 cri.go:89] found id: ""
	I0127 11:47:13.947922   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.947934   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:13.947943   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:13.948001   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:13.983161   70686 cri.go:89] found id: ""
	I0127 11:47:13.983190   70686 logs.go:282] 0 containers: []
	W0127 11:47:13.983201   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:13.983209   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:13.983264   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:14.022256   70686 cri.go:89] found id: ""
	I0127 11:47:14.022284   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.022295   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:14.022303   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:14.022354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:14.056796   70686 cri.go:89] found id: ""
	I0127 11:47:14.056830   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.056841   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:14.056848   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:14.056907   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:14.094914   70686 cri.go:89] found id: ""
	I0127 11:47:14.094941   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.094948   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:14.094954   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:14.095011   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:14.133436   70686 cri.go:89] found id: ""
	I0127 11:47:14.133463   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.133471   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:14.133477   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:14.133542   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:14.169031   70686 cri.go:89] found id: ""
	I0127 11:47:14.169062   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.169072   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:14.169078   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:14.169125   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:14.212411   70686 cri.go:89] found id: ""
	I0127 11:47:14.212435   70686 logs.go:282] 0 containers: []
	W0127 11:47:14.212443   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:14.212452   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:14.212463   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:14.262867   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:14.262898   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:14.275105   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:14.275131   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:14.341159   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:14.341190   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:14.341208   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:14.415317   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:14.415367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:11.180827   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:13.681189   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.682069   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:15.064304   69396 out.go:235]   - Booting up control plane ...
	I0127 11:47:15.064419   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:15.064539   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:15.064632   69396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:15.081619   69396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:15.087804   69396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:15.087864   69396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:15.215883   69396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:15.216024   69396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:15.717623   69396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.507256ms
	I0127 11:47:15.717711   69396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:20.718798   69396 kubeadm.go:310] [api-check] The API server is healthy after 5.001299318s
	I0127 11:47:20.735824   69396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:20.751647   69396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:20.776203   69396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:20.776453   69396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-273200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:20.786999   69396 kubeadm.go:310] [bootstrap-token] Using token: tjwk8y.hsba31n3brg7yicx
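
The two health checks kubeadm logs above poll the kubelet's plain-HTTP healthz endpoint on 127.0.0.1:10248 and then the API server, each with a 4m0s budget; here the kubelet reported healthy after ~501ms and the API server after ~5s. A small sketch of the kubelet-side probe (an approximation of kubeadm's behavior, not its code; the URL and timeout are from the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for time.Now().Before(deadline) {
		// Probe http://127.0.0.1:10248/healthz until it returns 200 OK.
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kubelet never became healthy within 4m0s")
}
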
	I0127 11:47:16.953543   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:16.966233   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:16.966320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:17.006909   70686 cri.go:89] found id: ""
	I0127 11:47:17.006936   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.006946   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:17.006953   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:17.007008   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:17.041632   70686 cri.go:89] found id: ""
	I0127 11:47:17.041659   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.041669   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:17.041677   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:17.041731   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:17.076772   70686 cri.go:89] found id: ""
	I0127 11:47:17.076801   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.076811   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:17.076818   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:17.076870   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:17.112391   70686 cri.go:89] found id: ""
	I0127 11:47:17.112422   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.112433   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:17.112440   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:17.112573   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:17.148197   70686 cri.go:89] found id: ""
	I0127 11:47:17.148229   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.148247   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:17.148255   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:17.148320   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:17.186840   70686 cri.go:89] found id: ""
	I0127 11:47:17.186871   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.186882   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:17.186895   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:17.186953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:17.219412   70686 cri.go:89] found id: ""
	I0127 11:47:17.219443   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.219454   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:17.219463   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:17.219534   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:17.256447   70686 cri.go:89] found id: ""
	I0127 11:47:17.256478   70686 logs.go:282] 0 containers: []
	W0127 11:47:17.256488   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:17.256499   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:17.256512   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.293919   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:17.293955   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:17.342997   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:17.343028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:17.356650   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:17.356679   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:17.425809   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:17.425838   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:17.425852   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
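The cycle above is minikube's generic log-gathering pass: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` over SSH and, when the output comes back empty, records that no container was found, then falls back to the kubelet, dmesg, and CRI-O journals. A minimal local sketch of that probe loop (plain exec on the current host rather than minikube's ssh_runner; component names taken from the log, not minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// --quiet prints only container IDs, one per line; -a includes exited containers.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }

All eight probes returning empty is what sends the runner down the journalctl/dmesg fallback path seen in the surrounding lines.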
	I0127 11:47:20.017327   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:20.034172   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:20.034239   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:20.071873   70686 cri.go:89] found id: ""
	I0127 11:47:20.071895   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.071903   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:20.071908   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:20.071955   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:20.106387   70686 cri.go:89] found id: ""
	I0127 11:47:20.106410   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.106417   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:20.106422   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:20.106481   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:20.141095   70686 cri.go:89] found id: ""
	I0127 11:47:20.141130   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.141138   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:20.141144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:20.141194   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:20.183275   70686 cri.go:89] found id: ""
	I0127 11:47:20.183302   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.183310   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:20.183316   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:20.183373   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:20.217954   70686 cri.go:89] found id: ""
	I0127 11:47:20.217981   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.217991   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:20.217999   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:20.218061   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:20.262572   70686 cri.go:89] found id: ""
	I0127 11:47:20.262604   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.262616   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:20.262623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:20.262677   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:20.297951   70686 cri.go:89] found id: ""
	I0127 11:47:20.297982   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.297993   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:20.298000   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:20.298088   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:20.331854   70686 cri.go:89] found id: ""
	I0127 11:47:20.331891   70686 logs.go:282] 0 containers: []
	W0127 11:47:20.331901   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:20.331913   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:20.331930   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:20.387238   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:20.387274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:20.409789   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:20.409823   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:20.487425   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:20.487451   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:20.487464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:20.563923   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:20.563959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:17.682390   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.182895   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:20.788426   69396 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:20.788582   69396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:20.793089   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:20.803401   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:20.812287   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:20.816685   69396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:20.822172   69396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:21.128937   69396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:21.553347   69396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:22.127179   69396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:22.127210   69396 kubeadm.go:310] 
	I0127 11:47:22.127314   69396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:22.127342   69396 kubeadm.go:310] 
	I0127 11:47:22.127419   69396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:22.127428   69396 kubeadm.go:310] 
	I0127 11:47:22.127467   69396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:22.127532   69396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:22.127584   69396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:22.127594   69396 kubeadm.go:310] 
	I0127 11:47:22.127682   69396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:22.127691   69396 kubeadm.go:310] 
	I0127 11:47:22.127757   69396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:22.127768   69396 kubeadm.go:310] 
	I0127 11:47:22.127848   69396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:22.127969   69396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:22.128089   69396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:22.128103   69396 kubeadm.go:310] 
	I0127 11:47:22.128204   69396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:22.128331   69396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:22.128350   69396 kubeadm.go:310] 
	I0127 11:47:22.128485   69396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.128622   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:22.128658   69396 kubeadm.go:310] 	--control-plane 
	I0127 11:47:22.128669   69396 kubeadm.go:310] 
	I0127 11:47:22.128793   69396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:22.128805   69396 kubeadm.go:310] 
	I0127 11:47:22.128921   69396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjwk8y.hsba31n3brg7yicx \
	I0127 11:47:22.129015   69396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
	I0127 11:47:22.129734   69396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
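Both join commands above embed the same --discovery-token-ca-cert-hash. kubeadm computes that pin as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, so the value can be re-derived from /etc/kubernetes/pki/ca.crt at any time. A small sketch of the derivation:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SubjectPublicKeyInfo, not the whole certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

Run against the CA of this cluster, the output would match the sha256:7084aae9... value printed in the join commands.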
	I0127 11:47:22.129770   69396 cni.go:84] Creating CNI manager for ""
	I0127 11:47:22.129781   69396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:22.131454   69396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:22.132751   69396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:22.143934   69396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
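The log records only the size of the bridge config written here (496 bytes), not its contents. For orientation, a minimal conflist in the standard CNI bridge + host-local shape looks like the following; every value is illustrative, not what minikube actually wrote to 1-k8s.conflist:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }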
	I0127 11:47:22.162031   69396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:22.162109   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.162131   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-273200 minikube.k8s.io/updated_at=2025_01_27T11_47_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-273200 minikube.k8s.io/primary=true
	I0127 11:47:22.357159   69396 ops.go:34] apiserver oom_adj: -16
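The -16 read back here means the kernel's OOM killer will strongly avoid the apiserver under memory pressure. The same probe can be done by reading /proc directly instead of shelling out to cat; a sketch, assuming a kube-apiserver process on the local host:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors `pgrep kube-apiserver`; take the first matching PID.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	pids := strings.Fields(string(out))
    	if err != nil || len(pids) == 0 {
    		fmt.Println("kube-apiserver not running")
    		return
    	}
    	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }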
	I0127 11:47:22.357255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:22.858227   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.101745   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:23.115010   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:23.115068   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:23.153195   70686 cri.go:89] found id: ""
	I0127 11:47:23.153223   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.153236   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:23.153244   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:23.153311   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:23.187393   70686 cri.go:89] found id: ""
	I0127 11:47:23.187420   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.187431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:23.187437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:23.187499   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:23.220850   70686 cri.go:89] found id: ""
	I0127 11:47:23.220879   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.220888   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:23.220896   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:23.220953   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:23.256597   70686 cri.go:89] found id: ""
	I0127 11:47:23.256626   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.256636   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:23.256644   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:23.256692   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:23.296324   70686 cri.go:89] found id: ""
	I0127 11:47:23.296356   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.296366   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:23.296373   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:23.296436   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:23.335645   70686 cri.go:89] found id: ""
	I0127 11:47:23.335672   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.335681   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:23.335687   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:23.335733   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:23.366972   70686 cri.go:89] found id: ""
	I0127 11:47:23.366995   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.367003   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:23.367008   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:23.367062   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:23.405377   70686 cri.go:89] found id: ""
	I0127 11:47:23.405404   70686 logs.go:282] 0 containers: []
	W0127 11:47:23.405412   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:23.405420   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:23.405433   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:23.473871   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:23.473898   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:23.473918   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:23.548827   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:23.548868   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:23.584272   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:23.584302   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:23.645470   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:23.645517   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:22.681079   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:24.681767   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:23.357378   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:23.858261   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.358001   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:24.858052   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.358029   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:25.858255   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.357827   69396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:26.545723   69396 kubeadm.go:1113] duration metric: took 4.38367816s to wait for elevateKubeSystemPrivileges
	I0127 11:47:26.545828   69396 kubeadm.go:394] duration metric: took 5m2.297374967s to StartCluster
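The half-second cadence of the `kubectl get sa default` runs above (22.357, 22.858, 23.357, ...) is a poll-until-ready loop: it succeeds once kube-controller-manager has created the default service account in the new cluster, which is what the 4.38s elevateKubeSystemPrivileges metric measures. Reduced to a sketch (plain kubectl on PATH; the timeout value is assumed, not taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		// Succeeds once the default service account exists.
    		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms spacing in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }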
	I0127 11:47:26.545882   69396 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.545994   69396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:26.548122   69396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:26.548782   69396 config.go:182] Loaded profile config "no-preload-273200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:26.548545   69396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:26.548897   69396 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:26.549176   69396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-273200"
	I0127 11:47:26.549197   69396 addons.go:238] Setting addon storage-provisioner=true in "no-preload-273200"
	W0127 11:47:26.549209   69396 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:47:26.549239   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.549690   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.549730   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.549955   69396 addons.go:69] Setting default-storageclass=true in profile "no-preload-273200"
	I0127 11:47:26.549974   69396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-273200"
	I0127 11:47:26.550340   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.550368   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.550531   69396 addons.go:69] Setting metrics-server=true in profile "no-preload-273200"
	I0127 11:47:26.550551   69396 addons.go:238] Setting addon metrics-server=true in "no-preload-273200"
	W0127 11:47:26.550559   69396 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:26.550590   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550587   69396 addons.go:69] Setting dashboard=true in profile "no-preload-273200"
	I0127 11:47:26.550619   69396 addons.go:238] Setting addon dashboard=true in "no-preload-273200"
	W0127 11:47:26.550629   69396 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:26.550671   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.550795   69396 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:26.550980   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551018   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.551086   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.551125   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.552072   69396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:26.591135   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0127 11:47:26.591160   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0127 11:47:26.591337   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0127 11:47:26.591436   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0127 11:47:26.591962   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.591974   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592254   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.592532   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592551   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592661   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592682   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.592699   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.592683   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.593029   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593065   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593226   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.593239   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.593679   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593720   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.593787   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.593821   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.596147   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.600142   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.600157   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.602457   69396 addons.go:238] Setting addon default-storageclass=true in "no-preload-273200"
	W0127 11:47:26.602479   69396 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:26.602510   69396 host.go:66] Checking if "no-preload-273200" exists ...
	I0127 11:47:26.602874   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.602914   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.604120   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.608202   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.608245   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.617629   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0127 11:47:26.618396   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.618963   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.618984   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.619363   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.619536   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.621603   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.623294   69396 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:26.625658   69396 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:26.626912   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:26.626933   69396 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:26.626955   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.630583   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0127 11:47:26.630587   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 11:47:26.631073   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.631690   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.631710   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.631883   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.632167   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.632324   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.632658   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.632673   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.633439   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.633559   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.633993   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.634505   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.634533   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.634773   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.634922   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.635051   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.635188   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.636019   69396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:26.636059   69396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:26.642473   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0127 11:47:26.645166   69396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:26.646249   69396 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:26.646264   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:26.646281   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.651734   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.651803   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.651826   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.651843   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.652136   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.659702   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.659915   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.663957   69396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0127 11:47:26.664289   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665037   69396 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:26.665168   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665183   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665558   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.665749   69396 main.go:141] libmachine: Using API Version  1
	I0127 11:47:26.665761   69396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:26.665970   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.666585   69396 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:26.666886   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetState
	I0127 11:47:26.667729   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669615   69396 main.go:141] libmachine: (no-preload-273200) Calling .DriverName
	I0127 11:47:26.669619   69396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:24.171505   69688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.597391159s)
	I0127 11:47:24.171597   69688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:47:24.187337   69688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:47:24.197062   69688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:47:24.208102   69688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:47:24.208127   69688 kubeadm.go:157] found existing configuration files:
	
	I0127 11:47:24.208176   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:47:24.223247   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:47:24.223306   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:47:24.232903   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:47:24.241163   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:47:24.241220   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:47:24.251669   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.260475   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:47:24.260534   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:47:24.269272   69688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:47:24.277509   69688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:47:24.277554   69688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:47:24.286253   69688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:47:24.435312   69688 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
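The config check at 11:47:24.208 onward follows a simple rule: each kubeconfig-style file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; a missing file or a stale endpoint means the file is removed so that the subsequent `kubeadm init` regenerates it. A local-filesystem sketch of that rule (minikube does the equivalent via grep and rm over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove so kubeadm regenerates it.
    			os.Remove(f)
    			fmt.Printf("removed (or absent): %s\n", f)
    			continue
    		}
    		fmt.Printf("kept: %s\n", f)
    	}
    }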
	I0127 11:47:26.669962   69396 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:26.669979   69396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:26.669998   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.670903   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:26.670919   69396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:26.670935   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHHostname
	I0127 11:47:26.675429   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678600   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678659   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHPort
	I0127 11:47:26.678709   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678726   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678749   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678771   69396 main.go:141] libmachine: (no-preload-273200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:91:77", ip: ""} in network mk-no-preload-273200: {Iface:virbr4 ExpiryTime:2025-01-27 12:41:59 +0000 UTC Type:0 Mac:52:54:00:5b:91:77 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:no-preload-273200 Clientid:01:52:54:00:5b:91:77}
	I0127 11:47:26.678781   69396 main.go:141] libmachine: (no-preload-273200) DBG | domain no-preload-273200 has defined IP address 192.168.61.181 and MAC address 52:54:00:5b:91:77 in network mk-no-preload-273200
	I0127 11:47:26.678803   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.678993   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHKeyPath
	I0127 11:47:26.679036   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679128   69396 main.go:141] libmachine: (no-preload-273200) Calling .GetSSHUsername
	I0127 11:47:26.679182   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.679386   69396 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/no-preload-273200/id_rsa Username:docker}
	I0127 11:47:26.875833   69396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:26.920571   69396 node_ready.go:35] waiting up to 6m0s for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939903   69396 node_ready.go:49] node "no-preload-273200" has status "Ready":"True"
	I0127 11:47:26.939926   69396 node_ready.go:38] duration metric: took 19.319573ms for node "no-preload-273200" to be "Ready" ...
	I0127 11:47:26.939937   69396 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:26.959191   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:27.008467   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:27.081273   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:27.081304   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:27.101527   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:27.152011   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:27.152043   69396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:27.244718   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:27.244747   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:27.252472   69396 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.252495   69396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:27.296605   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:27.313892   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:27.313920   69396 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:27.403990   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:27.404022   69396 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:27.477781   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:27.477811   69396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:27.571056   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:27.571086   69396 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:27.705284   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:27.705316   69396 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:27.789319   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:27.789349   69396 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:27.870737   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:27.870774   69396 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:27.935415   69396 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:27.935444   69396 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:27.990927   69396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
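Every addon follows the same install path seen here: manifests are scp'd into /etc/kubernetes/addons/ and then applied in a single kubectl invocation with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A condensed sketch of the apply step (two representative manifests only; kubectl assumed on PATH rather than the versioned in-VM binary):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	// Point kubectl at the in-VM admin kubeconfig, as the log does.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }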
	I0127 11:47:28.098209   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089707756s)
	I0127 11:47:28.098259   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098271   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098370   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098402   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098565   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098581   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098609   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098618   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098707   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098721   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.098730   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.098738   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.098839   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.098925   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.098945   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.099049   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.099059   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.099062   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.114073   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.114099   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.114382   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.114404   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.614645   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.317992457s)
	I0127 11:47:28.614719   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.614737   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.615709   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.615736   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.615759   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.615779   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:28.615792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:28.617426   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:28.617436   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:28.617454   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:28.617473   69396 addons.go:479] Verifying addon metrics-server=true in "no-preload-273200"
	I0127 11:47:28.972192   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.485321   69396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.494345914s)
	I0127 11:47:29.485395   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485413   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.485754   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.485774   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.485784   69396 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:29.485792   69396 main.go:141] libmachine: (no-preload-273200) Calling .Close
	I0127 11:47:29.486141   69396 main.go:141] libmachine: (no-preload-273200) DBG | Closing plugin on server side
	I0127 11:47:29.486164   69396 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:29.486172   69396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:29.487790   69396 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-273200 addons enable metrics-server
	
	I0127 11:47:29.489175   69396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:26.161139   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:26.175269   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:26.175344   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:26.213990   70686 cri.go:89] found id: ""
	I0127 11:47:26.214019   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.214030   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:26.214038   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:26.214099   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:26.250643   70686 cri.go:89] found id: ""
	I0127 11:47:26.250672   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.250680   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:26.250685   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:26.250749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:26.289305   70686 cri.go:89] found id: ""
	I0127 11:47:26.289327   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.289336   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:26.289343   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:26.289400   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:26.327511   70686 cri.go:89] found id: ""
	I0127 11:47:26.327546   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.327557   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:26.327564   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:26.327629   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:26.363961   70686 cri.go:89] found id: ""
	I0127 11:47:26.363996   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.364011   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:26.364019   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:26.364076   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:26.403759   70686 cri.go:89] found id: ""
	I0127 11:47:26.403782   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.403793   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:26.403801   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:26.403862   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:26.443391   70686 cri.go:89] found id: ""
	I0127 11:47:26.443419   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.443429   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:26.443436   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:26.443496   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:26.486086   70686 cri.go:89] found id: ""
	I0127 11:47:26.486189   70686 logs.go:282] 0 containers: []
	W0127 11:47:26.486219   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:26.486255   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:26.486290   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:26.537761   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:26.537789   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:26.624695   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:26.624728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:26.644616   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:26.644646   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:26.732815   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:26.732835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:26.732846   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
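The cycle above is minikube's log-gathering pass: it asks crictl for each expected control-plane container by name, falls back to docker ps when crictl is absent, and pulls kubelet, dmesg, and CRI-O output over SSH; describe-nodes fails because nothing is serving on localhost:8443 yet. A minimal sketch of the same checks run by hand, with <profile> standing in for this process's profile name (not shown in this excerpt):

	minikube -p <profile> ssh -- sudo crictl ps -a --name=kube-apiserver          # is any apiserver container present?
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400 --no-pager     # kubelet-side errors
	minikube -p <profile> ssh -- sudo journalctl -u crio -n 400 --no-pager        # runtime-side errors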
	I0127 11:47:29.315744   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:29.331345   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:29.331421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:29.366233   70686 cri.go:89] found id: ""
	I0127 11:47:29.366264   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.366276   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:29.366283   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:29.366355   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:29.402282   70686 cri.go:89] found id: ""
	I0127 11:47:29.402310   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.402320   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:29.402327   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:29.402389   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:29.438381   70686 cri.go:89] found id: ""
	I0127 11:47:29.438409   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.438420   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:29.438429   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:29.438483   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:29.473386   70686 cri.go:89] found id: ""
	I0127 11:47:29.473408   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.473414   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:29.473419   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:29.473465   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:29.506930   70686 cri.go:89] found id: ""
	I0127 11:47:29.506954   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.506961   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:29.506966   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:29.507025   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:29.542763   70686 cri.go:89] found id: ""
	I0127 11:47:29.542786   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.542794   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:29.542802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:29.542861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:29.578067   70686 cri.go:89] found id: ""
	I0127 11:47:29.578097   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.578108   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:29.578117   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:29.578176   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:29.613659   70686 cri.go:89] found id: ""
	I0127 11:47:29.613687   70686 logs.go:282] 0 containers: []
	W0127 11:47:29.613698   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:29.613709   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:29.613728   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:29.659409   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:29.659446   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:29.718837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:29.718870   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:29.735558   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:29.735583   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:29.839999   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:29.840025   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:29.840043   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:26.683550   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:29.183056   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:32.285356   69688 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:47:32.285447   69688 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:47:32.285583   69688 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:47:32.285722   69688 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:47:32.285858   69688 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:47:32.285955   69688 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:47:32.287165   69688 out.go:235]   - Generating certificates and keys ...
	I0127 11:47:32.287240   69688 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:47:32.287301   69688 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:47:32.287411   69688 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:47:32.287505   69688 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:47:32.287574   69688 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:47:32.287659   69688 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:47:32.287773   69688 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:47:32.287869   69688 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:47:32.287947   69688 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:47:32.288020   69688 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:47:32.288054   69688 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:47:32.288102   69688 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:47:32.288149   69688 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:47:32.288202   69688 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:47:32.288265   69688 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:47:32.288341   69688 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:47:32.288412   69688 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:47:32.288506   69688 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:47:32.288612   69688 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:47:32.290658   69688 out.go:235]   - Booting up control plane ...
	I0127 11:47:32.290754   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:47:32.290861   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:47:32.290938   69688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:47:32.291060   69688 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:47:32.291188   69688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:47:32.291240   69688 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:47:32.291426   69688 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:47:32.291585   69688 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:47:32.291703   69688 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.921879ms
	I0127 11:47:32.291805   69688 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:47:32.291896   69688 kubeadm.go:310] [api-check] The API server is healthy after 5.007975802s
	I0127 11:47:32.292039   69688 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:47:32.292235   69688 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:47:32.292322   69688 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:47:32.292582   69688 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-986409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:47:32.292672   69688 kubeadm.go:310] [bootstrap-token] Using token: qkdn31.mmb2k0rafw3oyd5r
	I0127 11:47:32.293870   69688 out.go:235]   - Configuring RBAC rules ...
	I0127 11:47:32.294001   69688 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:47:32.294069   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:47:32.294179   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:47:32.294287   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:47:32.294412   69688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:47:32.294512   69688 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:47:32.294620   69688 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:47:32.294658   69688 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:47:32.294697   69688 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:47:32.294704   69688 kubeadm.go:310] 
	I0127 11:47:32.294752   69688 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:47:32.294759   69688 kubeadm.go:310] 
	I0127 11:47:32.294824   69688 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:47:32.294834   69688 kubeadm.go:310] 
	I0127 11:47:32.294869   69688 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:47:32.294927   69688 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:47:32.294970   69688 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:47:32.294976   69688 kubeadm.go:310] 
	I0127 11:47:32.295034   69688 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:47:32.295040   69688 kubeadm.go:310] 
	I0127 11:47:32.295078   69688 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:47:32.295084   69688 kubeadm.go:310] 
	I0127 11:47:32.295129   69688 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:47:32.295218   69688 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:47:32.295321   69688 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:47:32.295333   69688 kubeadm.go:310] 
	I0127 11:47:32.295447   69688 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:47:32.295574   69688 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:47:32.295586   69688 kubeadm.go:310] 
	I0127 11:47:32.295723   69688 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.295861   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:47:32.295885   69688 kubeadm.go:310] 	--control-plane 
	I0127 11:47:32.295888   69688 kubeadm.go:310] 
	I0127 11:47:32.295957   69688 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:47:32.295963   69688 kubeadm.go:310] 
	I0127 11:47:32.296089   69688 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qkdn31.mmb2k0rafw3oyd5r \
	I0127 11:47:32.296217   69688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
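The join commands above embed bootstrap token qkdn31.mmb2k0rafw3oyd5r, which kubeadm creates with a 24-hour TTL by default, so they go stale. A sketch of minting a fresh join command on the control-plane node, assuming kubeadm sits alongside kubectl under /var/lib/minikube/binaries/v1.32.1/ as the kubectl invocations below suggest:

	minikube -p embed-certs-986409 ssh -- sudo /var/lib/minikube/binaries/v1.32.1/kubeadm token create --print-join-command
	minikube -p embed-certs-986409 ssh -- sudo /var/lib/minikube/binaries/v1.32.1/kubeadm token list   # inspect existing tokens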
	I0127 11:47:32.296242   69688 cni.go:84] Creating CNI manager for ""
	I0127 11:47:32.296252   69688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:47:32.297821   69688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:47:32.299024   69688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:47:32.311774   69688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
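That scp drops a 496-byte bridge conflist into /etc/cni/net.d/. To see exactly what was written, the file can be read back from the node:

	minikube -p embed-certs-986409 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist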
	I0127 11:47:32.333154   69688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:47:32.333250   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:32.333317   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-986409 minikube.k8s.io/updated_at=2025_01_27T11_47_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=embed-certs-986409 minikube.k8s.io/primary=true
	I0127 11:47:32.373901   69688 ops.go:34] apiserver oom_adj: -16
	I0127 11:47:32.614706   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:29.490582   69396 addons.go:514] duration metric: took 2.941688444s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:31.467084   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.115242   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:33.614855   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.114947   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:34.615735   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.114787   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.615277   69688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:47:35.708075   69688 kubeadm.go:1113] duration metric: took 3.374895681s to wait for elevateKubeSystemPrivileges
	I0127 11:47:35.708110   69688 kubeadm.go:394] duration metric: took 4m56.964886498s to StartCluster
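The half-second retries of "kubectl get sa default" above are how minikube waits for the controller-manager to create the default ServiceAccount after the minikube-rbac cluster-admin binding is applied. The same wait expressed directly against the cluster, assuming the kubeconfig context carries the profile name:

	kubectl --context embed-certs-986409 -n default get serviceaccount default   # retry until it exists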
	I0127 11:47:35.708127   69688 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.708206   69688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:47:35.709765   69688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:47:35.710017   69688 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:47:35.710099   69688 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:47:35.710197   69688 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-986409"
	I0127 11:47:35.710208   69688 addons.go:69] Setting default-storageclass=true in profile "embed-certs-986409"
	I0127 11:47:35.710224   69688 addons.go:69] Setting dashboard=true in profile "embed-certs-986409"
	I0127 11:47:35.710231   69688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-986409"
	I0127 11:47:35.710234   69688 addons.go:238] Setting addon dashboard=true in "embed-certs-986409"
	I0127 11:47:35.710215   69688 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-986409"
	W0127 11:47:35.710294   69688 addons.go:247] addon storage-provisioner should already be in state true
	W0127 11:47:35.710246   69688 addons.go:247] addon dashboard should already be in state true
	I0127 11:47:35.710361   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.710231   69688 config.go:182] Loaded profile config "embed-certs-986409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:47:35.710232   69688 addons.go:69] Setting metrics-server=true in profile "embed-certs-986409"
	I0127 11:47:35.710835   69688 addons.go:238] Setting addon metrics-server=true in "embed-certs-986409"
	W0127 11:47:35.710848   69688 addons.go:247] addon metrics-server should already be in state true
	I0127 11:47:35.710878   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.711284   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711319   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711356   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.711379   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.711948   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.712418   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.712548   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.713403   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.713472   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.719688   69688 out.go:177] * Verifying Kubernetes components...
	I0127 11:47:35.721496   69688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:47:35.730986   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0127 11:47:35.731485   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.731589   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0127 11:47:35.731973   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.731990   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732030   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732378   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.732610   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I0127 11:47:35.732868   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.732886   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.732943   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.732985   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733025   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733170   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.733387   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.733408   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.733574   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.733609   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.733744   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.734292   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.734315   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.739242   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0127 11:47:35.739695   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.740240   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.740254   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.740603   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.740797   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.744403   69688 addons.go:238] Setting addon default-storageclass=true in "embed-certs-986409"
	W0127 11:47:35.744426   69688 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:47:35.744451   69688 host.go:66] Checking if "embed-certs-986409" exists ...
	I0127 11:47:35.744823   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.744854   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.756768   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0127 11:47:35.757189   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.757717   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.757742   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.758231   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.758430   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.760526   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.762154   69688 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 11:47:35.763484   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:47:35.763499   69688 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:47:35.763517   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.766471   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.766836   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.766859   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.767027   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.767162   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.767269   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.767362   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.768736   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0127 11:47:35.769217   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.769830   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.769845   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.770259   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.770842   69688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:47:35.770876   69688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:47:35.773590   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0127 11:47:35.774146   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.774722   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.774738   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.774800   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0127 11:47:35.775433   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.775595   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.775820   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.776093   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.776103   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.776797   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.777045   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.777670   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.778791   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.779433   69688 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:47:35.780791   69688 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:47:35.782335   69688 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:47:32.447780   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:32.465728   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:32.465812   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:32.527859   70686 cri.go:89] found id: ""
	I0127 11:47:32.527947   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.527972   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:32.527990   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:32.528104   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:32.576073   70686 cri.go:89] found id: ""
	I0127 11:47:32.576171   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.576187   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:32.576195   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:32.576290   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:32.623076   70686 cri.go:89] found id: ""
	I0127 11:47:32.623118   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.623130   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:32.623137   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:32.623225   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:32.691228   70686 cri.go:89] found id: ""
	I0127 11:47:32.691318   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.691343   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:32.691362   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:32.691477   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:32.745780   70686 cri.go:89] found id: ""
	I0127 11:47:32.745811   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.745823   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:32.745831   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:32.745906   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:32.789692   70686 cri.go:89] found id: ""
	I0127 11:47:32.789731   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.789741   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:32.789751   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:32.789817   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:32.826257   70686 cri.go:89] found id: ""
	I0127 11:47:32.826288   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.826299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:32.826306   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:32.826368   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:32.868284   70686 cri.go:89] found id: ""
	I0127 11:47:32.868309   70686 logs.go:282] 0 containers: []
	W0127 11:47:32.868320   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:32.868332   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:32.868354   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:32.925073   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:32.925103   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:32.941771   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:32.941804   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:33.030670   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:33.030695   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:33.030706   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:33.113430   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:33.113464   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:35.663439   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:35.680531   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:35.680611   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:35.722549   70686 cri.go:89] found id: ""
	I0127 11:47:35.722571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.722581   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:35.722589   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:35.722634   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:35.788057   70686 cri.go:89] found id: ""
	I0127 11:47:35.788078   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.788084   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:35.788090   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:35.788127   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:35.833279   70686 cri.go:89] found id: ""
	I0127 11:47:35.833300   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.833308   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:35.833314   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:35.833357   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:35.874544   70686 cri.go:89] found id: ""
	I0127 11:47:35.874571   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.874582   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:35.874589   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:35.874654   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:35.915199   70686 cri.go:89] found id: ""
	I0127 11:47:35.915230   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.915242   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:35.915249   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:35.915314   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:31.183154   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:33.184826   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.682393   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
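Process 70237 keeps polling the same metrics-server pod without it ever turning Ready. These jobs point the metrics-server addon at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above for process 69688), so the image pull cannot succeed. A hedged way to confirm that from the pod events, with <profile> for the context name and assuming the conventional k8s-app=metrics-server label:

	kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server   # look for ImagePullBackOff events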
	I0127 11:47:35.782468   69688 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:35.782484   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:47:35.782515   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.783769   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:47:35.783786   69688 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:47:35.783877   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.786270   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786826   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.786854   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.786891   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787046   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787077   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787232   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.787378   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.787671   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.787689   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.787707   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.787860   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.787992   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.788077   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.793305   69688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0127 11:47:35.793811   69688 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:47:35.794453   69688 main.go:141] libmachine: Using API Version  1
	I0127 11:47:35.794473   69688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:47:35.794772   69688 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:47:35.795062   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetState
	I0127 11:47:35.796950   69688 main.go:141] libmachine: (embed-certs-986409) Calling .DriverName
	I0127 11:47:35.797253   69688 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:35.797272   69688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:47:35.797291   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHHostname
	I0127 11:47:35.800329   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800750   69688 main.go:141] libmachine: (embed-certs-986409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:d5:0d", ip: ""} in network mk-embed-certs-986409: {Iface:virbr1 ExpiryTime:2025-01-27 12:42:23 +0000 UTC Type:0 Mac:52:54:00:59:d5:0d Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:embed-certs-986409 Clientid:01:52:54:00:59:d5:0d}
	I0127 11:47:35.800775   69688 main.go:141] libmachine: (embed-certs-986409) DBG | domain embed-certs-986409 has defined IP address 192.168.72.29 and MAC address 52:54:00:59:d5:0d in network mk-embed-certs-986409
	I0127 11:47:35.800948   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHPort
	I0127 11:47:35.801144   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHKeyPath
	I0127 11:47:35.801274   69688 main.go:141] libmachine: (embed-certs-986409) Calling .GetSSHUsername
	I0127 11:47:35.801417   69688 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/embed-certs-986409/id_rsa Username:docker}
	I0127 11:47:35.954346   69688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:47:35.990894   69688 node_ready.go:35] waiting up to 6m0s for node "embed-certs-986409" to be "Ready" ...
	I0127 11:47:36.021695   69688 node_ready.go:49] node "embed-certs-986409" has status "Ready":"True"
	I0127 11:47:36.021724   69688 node_ready.go:38] duration metric: took 30.797887ms for node "embed-certs-986409" to be "Ready" ...
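Node registration is quick here: embed-certs-986409 reports Ready about 31ms after the first poll. The equivalent one-shot wait, assuming the kubeconfig context carries the profile name:

	kubectl --context embed-certs-986409 wait --for=condition=Ready node/embed-certs-986409 --timeout=6m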
	I0127 11:47:36.021737   69688 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:36.029373   69688 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.075684   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:47:36.075765   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:47:36.118613   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:47:36.128091   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:47:36.128117   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:47:36.143161   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:47:36.143196   69688 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:47:36.167151   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:47:36.195969   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:47:36.196003   69688 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:47:36.215973   69688 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.216001   69688 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:47:36.279892   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:47:36.279930   69688 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:47:36.302557   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:47:36.356672   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:47:36.356705   69688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:47:36.403728   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:47:36.403755   69688 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:47:36.490122   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:47:36.490161   69688 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:47:36.572014   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:47:36.572085   69688 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:47:36.666239   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:47:36.666266   69688 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:47:36.784627   69688 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:47:36.784652   69688 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:47:36.874981   69688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
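Ten dashboard manifests are staged under /etc/kubernetes/addons/ and applied in a single kubectl invocation. A sketch of checking the result, assuming the addon lands in its conventional kubernetes-dashboard namespace:

	kubectl --context embed-certs-986409 -n kubernetes-dashboard get deploy,svc,sa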
	I0127 11:47:37.244603   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077408875s)
	I0127 11:47:37.244729   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244748   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.244744   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.126101345s)
	I0127 11:47:37.244768   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.244778   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246690   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246704   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.246699   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246729   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246739   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246747   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.246781   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.246794   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.246804   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.246812   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.247222   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247287   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.247352   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.247364   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.248606   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.248624   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281282   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.281317   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.281631   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.281653   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.281654   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:33.966528   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:35.970381   69396 pod_ready.go:103] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:36.467240   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.467270   69396 pod_ready.go:82] duration metric: took 9.508045614s for pod "coredns-668d6bf9bc-nqskc" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.467284   69396 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474274   69396 pod_ready.go:93] pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.474309   69396 pod_ready.go:82] duration metric: took 7.015963ms for pod "coredns-668d6bf9bc-qh6rg" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.474322   69396 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480897   69396 pod_ready.go:93] pod "etcd-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.480926   69396 pod_ready.go:82] duration metric: took 6.596204ms for pod "etcd-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.480938   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487288   69396 pod_ready.go:93] pod "kube-apiserver-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.487320   69396 pod_ready.go:82] duration metric: took 6.372473ms for pod "kube-apiserver-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.487332   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497692   69396 pod_ready.go:93] pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.497721   69396 pod_ready.go:82] duration metric: took 10.381356ms for pod "kube-controller-manager-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.497733   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864696   69396 pod_ready.go:93] pod "kube-proxy-mct6v" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:36.864728   69396 pod_ready.go:82] duration metric: took 366.98634ms for pod "kube-proxy-mct6v" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:36.864742   69396 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265304   69396 pod_ready.go:93] pod "kube-scheduler-no-preload-273200" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:37.265326   69396 pod_ready.go:82] duration metric: took 400.576908ms for pod "kube-scheduler-no-preload-273200" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:37.265334   69396 pod_ready.go:39] duration metric: took 10.325386118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:37.265347   69396 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:37.265391   69396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:37.284810   69396 api_server.go:72] duration metric: took 10.735955735s to wait for apiserver process to appear ...
	I0127 11:47:37.284832   69396 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:37.284859   69396 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0127 11:47:37.292026   69396 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0127 11:47:37.293646   69396 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:37.293675   69396 api_server.go:131] duration metric: took 8.835297ms to wait for apiserver health ...
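The healthz probe is a plain HTTPS GET that returns the literal string "ok". By hand, with -k because the cluster CA is not in the local trust store:

	curl -k https://192.168.61.181:8443/healthz
	# ok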
	I0127 11:47:37.293685   69396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:37.469184   69396 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:37.469220   69396 system_pods.go:61] "coredns-668d6bf9bc-nqskc" [a9b24f06-5dc0-4a9e-a8f4-c6f311389c62] Running
	I0127 11:47:37.469228   69396 system_pods.go:61] "coredns-668d6bf9bc-qh6rg" [05780b99-a232-4846-a4b6-111f8d3d386e] Running
	I0127 11:47:37.469234   69396 system_pods.go:61] "etcd-no-preload-273200" [d1362a7f-ee18-4157-b8df-b9a3a9372f0a] Running
	I0127 11:47:37.469240   69396 system_pods.go:61] "kube-apiserver-no-preload-273200" [32c9d6be-2aac-475a-b7ba-0414122f7c6b] Running
	I0127 11:47:37.469247   69396 system_pods.go:61] "kube-controller-manager-no-preload-273200" [1091690b-7b66-4f8d-aa90-567ff97c5c19] Running
	I0127 11:47:37.469252   69396 system_pods.go:61] "kube-proxy-mct6v" [7cd1c7e8-827a-491e-8093-a7a3afc26544] Running
	I0127 11:47:37.469257   69396 system_pods.go:61] "kube-scheduler-no-preload-273200" [fde979de-7c70-4ef8-8d23-6ed01a30bf76] Running
	I0127 11:47:37.469265   69396 system_pods.go:61] "metrics-server-f79f97bbb-z6fn6" [8832c5ea-0c6b-4cc8-98da-d5d032ebb9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:37.469270   69396 system_pods.go:61] "storage-provisioner" [42d86701-11bb-4b1c-a522-ec9e7912d024] Running
	I0127 11:47:37.469280   69396 system_pods.go:74] duration metric: took 175.587004ms to wait for pod list to return data ...
	I0127 11:47:37.469292   69396 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:37.664628   69396 default_sa.go:45] found service account: "default"
	I0127 11:47:37.664664   69396 default_sa.go:55] duration metric: took 195.36433ms for default service account to be created ...
	I0127 11:47:37.664679   69396 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:37.868541   69396 system_pods.go:87] 9 kube-system pods found
	I0127 11:47:37.980174   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.677566724s)
	I0127 11:47:37.980228   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980244   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980560   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980582   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980592   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:37.980601   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:37.980880   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:37.980939   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:37.980966   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:37.980987   69688 addons.go:479] Verifying addon metrics-server=true in "embed-certs-986409"
	I0127 11:47:38.056288   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:38.999682   69688 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.124629898s)
	I0127 11:47:38.999752   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:38.999775   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000135   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000179   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.000185   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000205   69688 main.go:141] libmachine: Making call to close driver server
	I0127 11:47:39.000220   69688 main.go:141] libmachine: (embed-certs-986409) Calling .Close
	I0127 11:47:39.000492   69688 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:47:39.000493   69688 main.go:141] libmachine: (embed-certs-986409) DBG | Closing plugin on server side
	I0127 11:47:39.000507   69688 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:47:39.002275   69688 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-986409 addons enable metrics-server
	
	I0127 11:47:39.003930   69688 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:47:35.952137   70686 cri.go:89] found id: ""
	I0127 11:47:35.952165   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.952175   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:35.952183   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:35.952247   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:35.995842   70686 cri.go:89] found id: ""
	I0127 11:47:35.995870   70686 logs.go:282] 0 containers: []
	W0127 11:47:35.995882   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:35.995889   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:35.995946   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:36.045603   70686 cri.go:89] found id: ""
	I0127 11:47:36.045629   70686 logs.go:282] 0 containers: []
	W0127 11:47:36.045639   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:36.045647   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:36.045661   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:36.122919   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:36.122952   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:36.141794   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:36.141827   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:36.246196   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:36.246229   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:36.246253   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:36.363333   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:36.363378   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
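
Each cycle of the 70686 loop above asks CRI-O for containers matching one well-known name at a time (`sudo crictl ps -a --quiet --name=<component>`) and treats empty output as "No container was found matching ...". A sketch of that scan follows, run locally for illustration (the real suite executes it inside the VM over ssh_runner); the component list is copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the container IDs crictl reports for a
// given --name filter; --quiet prints one container ID per line.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// The components the log above scans for, in the same order.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("scan %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}
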
	I0127 11:47:38.920333   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:38.937466   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:38.937549   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:38.982630   70686 cri.go:89] found id: ""
	I0127 11:47:38.982660   70686 logs.go:282] 0 containers: []
	W0127 11:47:38.982672   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:38.982680   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:38.982741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:39.027004   70686 cri.go:89] found id: ""
	I0127 11:47:39.027034   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.027045   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:39.027052   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:39.027114   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:39.068819   70686 cri.go:89] found id: ""
	I0127 11:47:39.068841   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.068849   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:39.068854   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:39.068900   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:39.105724   70686 cri.go:89] found id: ""
	I0127 11:47:39.105758   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.105770   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:39.105779   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:39.105849   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:39.156156   70686 cri.go:89] found id: ""
	I0127 11:47:39.156183   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.156193   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:39.156200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:39.156257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:39.193966   70686 cri.go:89] found id: ""
	I0127 11:47:39.194002   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.194012   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:39.194021   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:39.194085   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:39.231373   70686 cri.go:89] found id: ""
	I0127 11:47:39.231398   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.231407   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:39.231415   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:39.231479   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:39.278257   70686 cri.go:89] found id: ""
	I0127 11:47:39.278288   70686 logs.go:282] 0 containers: []
	W0127 11:47:39.278299   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:39.278309   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:39.278324   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:39.356076   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:39.356128   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:39.371224   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:39.371259   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:39.446307   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:39.446334   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:39.446350   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:39.543997   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:39.544032   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:38.182709   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:40.681322   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:39.005168   69688 addons.go:514] duration metric: took 3.295073777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:47:40.536239   69688 pod_ready.go:103] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:41.539907   69688 pod_ready.go:93] pod "etcd-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:41.539938   69688 pod_ready.go:82] duration metric: took 5.510539517s for pod "etcd-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:41.539950   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046422   69688 pod_ready.go:93] pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.046450   69688 pod_ready.go:82] duration metric: took 506.490576ms for pod "kube-apiserver-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.046464   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.056999   69688 pod_ready.go:93] pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.057022   69688 pod_ready.go:82] duration metric: took 10.550413ms for pod "kube-controller-manager-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.057033   69688 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066831   69688 pod_ready.go:93] pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace has status "Ready":"True"
	I0127 11:47:42.066859   69688 pod_ready.go:82] duration metric: took 9.817042ms for pod "kube-scheduler-embed-certs-986409" in "kube-system" namespace to be "Ready" ...
	I0127 11:47:42.066869   69688 pod_ready.go:39] duration metric: took 6.045119057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:42.066885   69688 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:47:42.066943   69688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.106914   69688 api_server.go:72] duration metric: took 6.396863225s to wait for apiserver process to appear ...
	I0127 11:47:42.106942   69688 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:47:42.106967   69688 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0127 11:47:42.115128   69688 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0127 11:47:42.116724   69688 api_server.go:141] control plane version: v1.32.1
	I0127 11:47:42.116746   69688 api_server.go:131] duration metric: took 9.796211ms to wait for apiserver health ...
	I0127 11:47:42.116753   69688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:47:42.123449   69688 system_pods.go:59] 9 kube-system pods found
	I0127 11:47:42.123472   69688 system_pods.go:61] "coredns-668d6bf9bc-9sk5f" [c6114990-b336-472e-8720-1ef5ccd3b001] Running
	I0127 11:47:42.123479   69688 system_pods.go:61] "coredns-668d6bf9bc-jvx66" [7eab12a3-7303-43fc-84fa-034ced59689b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 11:47:42.123486   69688 system_pods.go:61] "etcd-embed-certs-986409" [ebdc15ff-c173-440b-ae1a-c0bc983c015b] Running
	I0127 11:47:42.123491   69688 system_pods.go:61] "kube-apiserver-embed-certs-986409" [3cbf2980-e1b2-4cff-8d01-ab9ec4806976] Running
	I0127 11:47:42.123496   69688 system_pods.go:61] "kube-controller-manager-embed-certs-986409" [642b9798-c605-4987-9d0d-2481f451d943] Running
	I0127 11:47:42.123503   69688 system_pods.go:61] "kube-proxy-b82rc" [08412bee-7381-4d81-bb67-fb39fefc29bb] Running
	I0127 11:47:42.123508   69688 system_pods.go:61] "kube-scheduler-embed-certs-986409" [7774826a-ca31-4662-94db-76f6ccbf07c3] Running
	I0127 11:47:42.123516   69688 system_pods.go:61] "metrics-server-f79f97bbb-pjkmz" [4828c28f-5ef4-48ea-9360-151007c2d9be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:47:42.123522   69688 system_pods.go:61] "storage-provisioner" [df18a80b-cc75-49f1-bd1a-48bab4776d25] Running
	I0127 11:47:42.123530   69688 system_pods.go:74] duration metric: took 6.771018ms to wait for pod list to return data ...
	I0127 11:47:42.123541   69688 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:47:42.127202   69688 default_sa.go:45] found service account: "default"
	I0127 11:47:42.127219   69688 default_sa.go:55] duration metric: took 3.6724ms for default service account to be created ...
	I0127 11:47:42.127227   69688 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:47:42.139808   69688 system_pods.go:87] 9 kube-system pods found
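
The "system pods", "default service account", and "k8s-apps running" phases above all reduce to listing objects in kube-system and inspecting pod conditions. A client-go sketch of the readiness half follows; the kubeconfig path is an assumption, and this shows the general pattern rather than minikube's exact system_pods.go code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q Ready=%v phase=%s\n", p.Name, isPodReady(&p), p.Status.Phase)
	}
}
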
	I0127 11:47:42.081513   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:42.095014   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:42.095074   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:42.130635   70686 cri.go:89] found id: ""
	I0127 11:47:42.130660   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.130670   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:42.130677   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:42.130741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:42.169363   70686 cri.go:89] found id: ""
	I0127 11:47:42.169394   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.169405   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:42.169415   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:42.169475   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:42.213803   70686 cri.go:89] found id: ""
	I0127 11:47:42.213831   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.213839   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:42.213849   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:42.213911   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:42.249475   70686 cri.go:89] found id: ""
	I0127 11:47:42.249505   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.249516   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:42.249524   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:42.249719   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:42.297727   70686 cri.go:89] found id: ""
	I0127 11:47:42.297753   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.297765   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:42.297770   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:42.297822   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:42.340478   70686 cri.go:89] found id: ""
	I0127 11:47:42.340503   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.340513   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:42.340520   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:42.340580   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:42.372922   70686 cri.go:89] found id: ""
	I0127 11:47:42.372952   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.372963   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:42.372971   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:42.373029   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:42.407938   70686 cri.go:89] found id: ""
	I0127 11:47:42.407967   70686 logs.go:282] 0 containers: []
	W0127 11:47:42.407978   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:42.407989   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:42.408005   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:42.484491   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:42.484530   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:42.484553   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:42.579113   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:42.579152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:42.624076   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:42.624105   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:42.679902   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:42.679934   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:45.194468   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:45.207509   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:45.207572   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:45.239999   70686 cri.go:89] found id: ""
	I0127 11:47:45.240028   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.240039   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:45.240046   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:45.240098   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:45.273395   70686 cri.go:89] found id: ""
	I0127 11:47:45.273422   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.273431   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:45.273437   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:45.273495   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:45.311168   70686 cri.go:89] found id: ""
	I0127 11:47:45.311202   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.311212   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:45.311220   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:45.311284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:45.349465   70686 cri.go:89] found id: ""
	I0127 11:47:45.349491   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.349508   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:45.349513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:45.349568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:45.385823   70686 cri.go:89] found id: ""
	I0127 11:47:45.385848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.385856   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:45.385862   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:45.385919   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:45.426563   70686 cri.go:89] found id: ""
	I0127 11:47:45.426591   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.426603   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:45.426610   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:45.426669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:45.467818   70686 cri.go:89] found id: ""
	I0127 11:47:45.467848   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.467856   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:45.467861   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:45.467913   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:45.505509   70686 cri.go:89] found id: ""
	I0127 11:47:45.505551   70686 logs.go:282] 0 containers: []
	W0127 11:47:45.505570   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:45.505581   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:45.505595   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:45.562102   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:45.562134   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:45.576502   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:45.576547   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:45.656107   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:45.656179   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:45.656200   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:45.740259   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:45.740307   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
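
The 70686 run repeats this whole scan roughly every three seconds because no control plane ever comes up on that v1.20.0 profile: pgrep finds no kube-apiserver, crictl finds no containers, and `kubectl describe nodes` dies on a refused connection. The cadence is the usual poll-until-timeout pattern; a sketch with apimachinery's wait helper follows (the 3s interval matches the log's cycle times; the 2-minute deadline is an assumed value).

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll for a kube-apiserver process the way the cycles above do,
	// retrying every 3s until the (assumed) deadline.
	err := wait.PollImmediate(3*time.Second, 2*time.Minute, func() (bool, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits non-zero when nothing matches; keep polling.
			return false, nil
		}
		fmt.Printf("apiserver pid(s): %s", out)
		return true, nil
	})
	if err != nil {
		fmt.Println("apiserver process never appeared:", err)
	}
}
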
	I0127 11:47:43.182256   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:45.682893   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:48.288077   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:48.305506   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:48.305575   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:48.341384   70686 cri.go:89] found id: ""
	I0127 11:47:48.341413   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.341424   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:48.341431   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:48.341490   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:48.385225   70686 cri.go:89] found id: ""
	I0127 11:47:48.385256   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.385266   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:48.385273   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:48.385331   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:48.432004   70686 cri.go:89] found id: ""
	I0127 11:47:48.432026   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.432034   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:48.432039   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:48.432096   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:48.467009   70686 cri.go:89] found id: ""
	I0127 11:47:48.467037   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.467047   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:48.467054   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:48.467111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:48.503820   70686 cri.go:89] found id: ""
	I0127 11:47:48.503847   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.503858   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:48.503864   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:48.503909   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:48.538884   70686 cri.go:89] found id: ""
	I0127 11:47:48.538908   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.538915   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:48.538924   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:48.538983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:48.572744   70686 cri.go:89] found id: ""
	I0127 11:47:48.572773   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.572783   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:48.572791   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:48.572853   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:48.610043   70686 cri.go:89] found id: ""
	I0127 11:47:48.610076   70686 logs.go:282] 0 containers: []
	W0127 11:47:48.610086   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:48.610108   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:48.610123   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:48.683427   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:48.683468   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:48.698950   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:48.698984   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:48.771789   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:48.771819   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:48.771833   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:48.852605   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:48.852642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:48.185457   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:50.682230   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:51.390888   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:51.403787   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:51.403867   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:51.438712   70686 cri.go:89] found id: ""
	I0127 11:47:51.438739   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.438746   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:51.438752   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:51.438808   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:51.476783   70686 cri.go:89] found id: ""
	I0127 11:47:51.476811   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.476821   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:51.476829   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:51.476887   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:51.509461   70686 cri.go:89] found id: ""
	I0127 11:47:51.509505   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.509522   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:51.509534   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:51.509592   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:51.545890   70686 cri.go:89] found id: ""
	I0127 11:47:51.545918   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.545936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:51.545943   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:51.546004   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:51.582831   70686 cri.go:89] found id: ""
	I0127 11:47:51.582859   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.582868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:51.582876   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:51.582935   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:51.618841   70686 cri.go:89] found id: ""
	I0127 11:47:51.618866   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.618874   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:51.618880   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:51.618934   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:51.654004   70686 cri.go:89] found id: ""
	I0127 11:47:51.654037   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.654048   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:51.654055   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:51.654119   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:51.693492   70686 cri.go:89] found id: ""
	I0127 11:47:51.693525   70686 logs.go:282] 0 containers: []
	W0127 11:47:51.693535   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:51.693547   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:51.693561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:51.742871   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:51.742901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:51.756625   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:51.756648   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:51.818231   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:51.818258   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:51.818274   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:51.897522   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:51.897556   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.435357   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:54.447575   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:54.447662   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:54.481516   70686 cri.go:89] found id: ""
	I0127 11:47:54.481546   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.481557   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:54.481565   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:54.481628   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:54.513468   70686 cri.go:89] found id: ""
	I0127 11:47:54.513494   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.513503   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:54.513510   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:54.513564   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:54.546743   70686 cri.go:89] found id: ""
	I0127 11:47:54.546768   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.546776   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:54.546781   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:54.546837   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:54.577457   70686 cri.go:89] found id: ""
	I0127 11:47:54.577495   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.577525   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:54.577533   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:54.577604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:54.607337   70686 cri.go:89] found id: ""
	I0127 11:47:54.607366   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.607375   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:54.607381   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:54.607427   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:54.651259   70686 cri.go:89] found id: ""
	I0127 11:47:54.651290   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.651301   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:54.651308   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:54.651369   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:54.688579   70686 cri.go:89] found id: ""
	I0127 11:47:54.688604   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.688613   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:54.688619   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:54.688678   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:54.725278   70686 cri.go:89] found id: ""
	I0127 11:47:54.725322   70686 logs.go:282] 0 containers: []
	W0127 11:47:54.725341   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:54.725353   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:54.725367   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:54.791430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:54.791452   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:54.791465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:54.868163   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:54.868191   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:47:54.905354   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:54.905385   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:54.957412   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:54.957444   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
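
With the apiserver down, the only diagnostics each cycle can actually gather are host-level: the kubelet and CRI-O units via journalctl, kernel warnings via dmesg, and a container inventory via crictl with a docker fallback. A sketch of that gather step, shelling out through `bash -c` the way ssh_runner does (run locally here for illustration; the command strings are copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command through bash -c, mirroring
// how the ssh_runner invocations above are shaped.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", label, err)
	}
	fmt.Printf("=== %s (%d bytes) ===\n", label, len(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
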
	I0127 11:47:53.181163   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:55.181247   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:57.471717   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:47:57.484472   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:47:57.484545   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:47:57.515302   70686 cri.go:89] found id: ""
	I0127 11:47:57.515334   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.515345   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:47:57.515353   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:47:57.515412   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:47:57.548214   70686 cri.go:89] found id: ""
	I0127 11:47:57.548239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.548248   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:47:57.548255   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:47:57.548316   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:47:57.581598   70686 cri.go:89] found id: ""
	I0127 11:47:57.581624   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.581632   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:47:57.581637   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:47:57.581682   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:47:57.617610   70686 cri.go:89] found id: ""
	I0127 11:47:57.617642   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.617654   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:47:57.617661   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:47:57.617726   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:47:57.650213   70686 cri.go:89] found id: ""
	I0127 11:47:57.650239   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.650246   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:47:57.650252   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:47:57.650319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:47:57.688111   70686 cri.go:89] found id: ""
	I0127 11:47:57.688132   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.688142   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:47:57.688150   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:47:57.688197   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:47:57.720752   70686 cri.go:89] found id: ""
	I0127 11:47:57.720782   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.720792   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:47:57.720798   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:47:57.720845   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:47:57.751896   70686 cri.go:89] found id: ""
	I0127 11:47:57.751925   70686 logs.go:282] 0 containers: []
	W0127 11:47:57.751936   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:47:57.751946   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:47:57.751959   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:47:57.802765   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:47:57.802797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:47:57.815299   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:47:57.815323   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:47:57.878584   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:47:57.878612   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:47:57.878627   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.954926   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:47:57.954961   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:00.492831   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:00.505398   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:00.505458   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:00.541546   70686 cri.go:89] found id: ""
	I0127 11:48:00.541572   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.541583   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:00.541590   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:00.541658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:00.574543   70686 cri.go:89] found id: ""
	I0127 11:48:00.574575   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.574585   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:00.574596   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:00.574658   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:00.607826   70686 cri.go:89] found id: ""
	I0127 11:48:00.607855   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.607865   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:00.607872   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:00.607931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:00.642893   70686 cri.go:89] found id: ""
	I0127 11:48:00.642925   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.642936   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:00.642944   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:00.642997   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:00.675525   70686 cri.go:89] found id: ""
	I0127 11:48:00.675549   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.675557   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:00.675563   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:00.675642   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:00.708878   70686 cri.go:89] found id: ""
	I0127 11:48:00.708913   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.708921   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:00.708926   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:00.708971   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:00.740471   70686 cri.go:89] found id: ""
	I0127 11:48:00.740505   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.740512   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:00.740518   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:00.740568   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:00.776050   70686 cri.go:89] found id: ""
	I0127 11:48:00.776078   70686 logs.go:282] 0 containers: []
	W0127 11:48:00.776088   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:00.776099   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:00.776112   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:00.789429   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:00.789465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:00.855134   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:00.855159   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:00.855176   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:47:57.684463   70237 pod_ready.go:103] pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace has status "Ready":"False"
	I0127 11:47:59.175404   70237 pod_ready.go:82] duration metric: took 4m0.000243677s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" ...
	E0127 11:47:59.175451   70237 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-swwsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 11:47:59.175501   70237 pod_ready.go:39] duration metric: took 4m10.536256424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:47:59.175547   70237 kubeadm.go:597] duration metric: took 4m18.512037331s to restartPrimaryControlPlane
	W0127 11:47:59.175647   70237 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:47:59.175705   70237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
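
For the 70237 profile the extra wait gives up after exactly 4m0s: metrics-server never reports Ready, WaitExtra logs that it will not retry, and the run falls back from restarting the control plane to resetting it with kubeadm. A compact sketch of that wait-then-reset control flow (the 4-minute timeout and the reset command are taken from the log lines above; podBecameReady is a hypothetical stand-in for the real readiness poll):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// podBecameReady stands in for the real readiness poll; here it just
// blocks until the context expires, as metrics-server did above.
func podBecameReady(ctx context.Context) bool {
	<-ctx.Done()
	return false
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	if !podBecameReady(ctx) {
		fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
		// Same reset command the log shows, wrapped in bash -c.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("kubeadm reset failed: %v\n%s", err, out)
		}
	}
}
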
	I0127 11:48:00.932863   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:00.932910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:00.969770   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:00.969797   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.521596   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:03.536040   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:03.536171   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:03.571013   70686 cri.go:89] found id: ""
	I0127 11:48:03.571046   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.571057   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:03.571065   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:03.571128   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:03.605846   70686 cri.go:89] found id: ""
	I0127 11:48:03.605871   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.605879   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:03.605885   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:03.605931   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:03.641481   70686 cri.go:89] found id: ""
	I0127 11:48:03.641515   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.641524   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:03.641529   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:03.641595   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:03.676290   70686 cri.go:89] found id: ""
	I0127 11:48:03.676316   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.676326   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:03.676333   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:03.676395   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:03.713213   70686 cri.go:89] found id: ""
	I0127 11:48:03.713235   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.713243   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:03.713248   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:03.713337   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:03.746114   70686 cri.go:89] found id: ""
	I0127 11:48:03.746141   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.746151   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:03.746158   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:03.746217   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:03.780250   70686 cri.go:89] found id: ""
	I0127 11:48:03.780289   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.780299   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:03.780307   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:03.780354   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:03.817856   70686 cri.go:89] found id: ""
	I0127 11:48:03.817885   70686 logs.go:282] 0 containers: []
	W0127 11:48:03.817896   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:03.817907   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:03.817921   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:03.898728   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:03.898779   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:03.935189   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:03.935222   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:03.990903   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:03.990946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:04.004559   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:04.004584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:04.078588   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
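Process 70686 cycles through this probe roughly every three seconds: pgrep for a kube-apiserver process, enumerate CRI containers for each expected control-plane component, then gather kubelet, dmesg, CRI-O, and describe-nodes output when nothing turns up. Reduced to a shell sketch (an assumed simplification of the Go retry logic in cri.go/logs.go, not the actual implementation):

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"   # empty output: component not running
	  done
	  sleep 3   # next probe cycle
	done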
	I0127 11:48:06.578765   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:06.592073   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:06.592134   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:06.624430   70686 cri.go:89] found id: ""
	I0127 11:48:06.624465   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.624476   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:06.624484   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:06.624555   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:06.677207   70686 cri.go:89] found id: ""
	I0127 11:48:06.677244   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.677257   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:06.677264   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:06.677346   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:06.718809   70686 cri.go:89] found id: ""
	I0127 11:48:06.718833   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.718840   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:06.718845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:06.718890   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:06.754041   70686 cri.go:89] found id: ""
	I0127 11:48:06.754076   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.754089   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:06.754100   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:06.754160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:06.785748   70686 cri.go:89] found id: ""
	I0127 11:48:06.785776   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.785788   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:06.785795   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:06.785854   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:06.819849   70686 cri.go:89] found id: ""
	I0127 11:48:06.819872   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.819879   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:06.819884   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:06.819930   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:06.853347   70686 cri.go:89] found id: ""
	I0127 11:48:06.853372   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.853381   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:06.853387   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:06.853438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:06.885714   70686 cri.go:89] found id: ""
	I0127 11:48:06.885740   70686 logs.go:282] 0 containers: []
	W0127 11:48:06.885747   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:06.885755   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:06.885765   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:06.921805   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:06.921832   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:06.974607   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:06.974638   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:06.987566   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:06.987625   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:07.056872   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:07.056892   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:07.056905   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:09.644164   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:09.657446   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:09.657519   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:09.696908   70686 cri.go:89] found id: ""
	I0127 11:48:09.696940   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.696950   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:09.696957   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:09.697016   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:09.729636   70686 cri.go:89] found id: ""
	I0127 11:48:09.729665   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.729675   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:09.729682   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:09.729742   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:09.769699   70686 cri.go:89] found id: ""
	I0127 11:48:09.769726   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.769734   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:09.769740   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:09.769791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:09.801315   70686 cri.go:89] found id: ""
	I0127 11:48:09.801360   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.801368   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:09.801374   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:09.801432   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:09.831170   70686 cri.go:89] found id: ""
	I0127 11:48:09.831212   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.831221   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:09.831226   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:09.831294   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:09.862163   70686 cri.go:89] found id: ""
	I0127 11:48:09.862188   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.862194   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:09.862200   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:09.862262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:09.893097   70686 cri.go:89] found id: ""
	I0127 11:48:09.893125   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.893136   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:09.893144   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:09.893201   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:09.924215   70686 cri.go:89] found id: ""
	I0127 11:48:09.924247   70686 logs.go:282] 0 containers: []
	W0127 11:48:09.924259   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:09.924269   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:09.924286   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:09.990827   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:09.990849   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:09.990859   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:10.063335   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:10.063366   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:10.099158   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:10.099199   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:10.150789   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:10.150821   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:12.664524   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:12.677711   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:12.677791   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:12.710353   70686 cri.go:89] found id: ""
	I0127 11:48:12.710377   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.710384   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:12.710389   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:12.710443   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:12.743545   70686 cri.go:89] found id: ""
	I0127 11:48:12.743572   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.743579   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:12.743584   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:12.743646   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:12.775386   70686 cri.go:89] found id: ""
	I0127 11:48:12.775413   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.775423   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:12.775430   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:12.775488   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:12.808803   70686 cri.go:89] found id: ""
	I0127 11:48:12.808828   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.808835   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:12.808841   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:12.808898   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:12.842531   70686 cri.go:89] found id: ""
	I0127 11:48:12.842554   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.842561   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:12.842566   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:12.842610   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:12.875470   70686 cri.go:89] found id: ""
	I0127 11:48:12.875501   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.875512   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:12.875522   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:12.875579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:12.908768   70686 cri.go:89] found id: ""
	I0127 11:48:12.908790   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.908797   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:12.908802   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:12.908848   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:12.943312   70686 cri.go:89] found id: ""
	I0127 11:48:12.943340   70686 logs.go:282] 0 containers: []
	W0127 11:48:12.943348   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:12.943356   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:12.943368   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:12.995939   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:12.995971   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:13.009006   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:13.009028   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:13.097589   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:13.097607   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:13.097618   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:13.180494   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:13.180526   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:15.719725   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:15.733707   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:15.733770   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:15.771051   70686 cri.go:89] found id: ""
	I0127 11:48:15.771076   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.771086   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:15.771094   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:15.771156   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:15.803893   70686 cri.go:89] found id: ""
	I0127 11:48:15.803926   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.803938   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:15.803945   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:15.803995   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:15.840882   70686 cri.go:89] found id: ""
	I0127 11:48:15.840915   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.840927   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:15.840935   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:15.840993   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:15.879101   70686 cri.go:89] found id: ""
	I0127 11:48:15.879132   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.879144   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:15.879165   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:15.879227   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:15.910272   70686 cri.go:89] found id: ""
	I0127 11:48:15.910306   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.910317   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:15.910325   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:15.910385   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:15.942060   70686 cri.go:89] found id: ""
	I0127 11:48:15.942085   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.942093   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:15.942099   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:15.942160   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:15.975105   70686 cri.go:89] found id: ""
	I0127 11:48:15.975136   70686 logs.go:282] 0 containers: []
	W0127 11:48:15.975147   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:15.975155   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:15.975219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:16.009270   70686 cri.go:89] found id: ""
	I0127 11:48:16.009302   70686 logs.go:282] 0 containers: []
	W0127 11:48:16.009313   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:16.009323   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:16.009337   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:16.059868   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:16.059901   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:16.074089   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:16.074118   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:16.150389   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:16.150435   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:16.150450   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:16.226031   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:16.226070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:18.766131   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:18.780688   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:18.780758   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:18.827413   70686 cri.go:89] found id: ""
	I0127 11:48:18.827443   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.827454   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:18.827462   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:18.827528   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:18.890142   70686 cri.go:89] found id: ""
	I0127 11:48:18.890169   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.890179   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:18.890187   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:18.890252   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:18.921896   70686 cri.go:89] found id: ""
	I0127 11:48:18.921925   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.921933   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:18.921938   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:18.921989   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:18.956705   70686 cri.go:89] found id: ""
	I0127 11:48:18.956728   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.956736   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:18.956744   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:18.956813   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:18.989832   70686 cri.go:89] found id: ""
	I0127 11:48:18.989858   70686 logs.go:282] 0 containers: []
	W0127 11:48:18.989868   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:18.989874   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:18.989929   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:19.026132   70686 cri.go:89] found id: ""
	I0127 11:48:19.026159   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.026166   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:19.026173   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:19.026219   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:19.059138   70686 cri.go:89] found id: ""
	I0127 11:48:19.059162   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.059170   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:19.059175   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:19.059220   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:19.092018   70686 cri.go:89] found id: ""
	I0127 11:48:19.092048   70686 logs.go:282] 0 containers: []
	W0127 11:48:19.092058   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:19.092069   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:19.092085   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:19.167121   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:19.167152   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:19.205334   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:19.205364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:19.254602   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:19.254639   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:19.268979   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:19.269006   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:19.338679   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:21.839791   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:21.852667   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:21.852727   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:21.886171   70686 cri.go:89] found id: ""
	I0127 11:48:21.886197   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.886205   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:21.886210   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:21.886257   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:21.921652   70686 cri.go:89] found id: ""
	I0127 11:48:21.921679   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.921689   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:21.921696   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:21.921755   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:21.957643   70686 cri.go:89] found id: ""
	I0127 11:48:21.957670   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.957679   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:21.957686   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:21.957746   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:21.992841   70686 cri.go:89] found id: ""
	I0127 11:48:21.992871   70686 logs.go:282] 0 containers: []
	W0127 11:48:21.992881   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:21.992888   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:21.992952   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:22.028313   70686 cri.go:89] found id: ""
	I0127 11:48:22.028356   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.028365   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:22.028376   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:22.028421   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:22.063653   70686 cri.go:89] found id: ""
	I0127 11:48:22.063679   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.063686   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:22.063692   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:22.063749   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:22.095804   70686 cri.go:89] found id: ""
	I0127 11:48:22.095831   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.095839   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:22.095845   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:22.095904   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:22.128161   70686 cri.go:89] found id: ""
	I0127 11:48:22.128194   70686 logs.go:282] 0 containers: []
	W0127 11:48:22.128205   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:22.128217   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:22.128231   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:22.166325   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:22.166348   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:22.216549   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:22.216599   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:22.229716   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:22.229745   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:22.295957   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:22.295985   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:22.296000   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:24.876705   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:24.889666   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:24.889741   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:24.923871   70686 cri.go:89] found id: ""
	I0127 11:48:24.923904   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.923915   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:24.923923   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:24.923983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:24.959046   70686 cri.go:89] found id: ""
	I0127 11:48:24.959078   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.959090   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:24.959098   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:24.959151   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:24.994427   70686 cri.go:89] found id: ""
	I0127 11:48:24.994457   70686 logs.go:282] 0 containers: []
	W0127 11:48:24.994468   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:24.994475   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:24.994535   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:25.026201   70686 cri.go:89] found id: ""
	I0127 11:48:25.026230   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.026239   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:25.026247   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:25.026309   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:25.058228   70686 cri.go:89] found id: ""
	I0127 11:48:25.058250   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.058258   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:25.058263   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:25.058319   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:25.089128   70686 cri.go:89] found id: ""
	I0127 11:48:25.089165   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.089176   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:25.089186   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:25.089262   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:25.124376   70686 cri.go:89] found id: ""
	I0127 11:48:25.124404   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.124411   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:25.124417   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:25.124464   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:25.157926   70686 cri.go:89] found id: ""
	I0127 11:48:25.157959   70686 logs.go:282] 0 containers: []
	W0127 11:48:25.157970   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:25.157982   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:25.157996   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:25.208316   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:25.208347   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:25.223045   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:25.223070   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:25.289735   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:25.289757   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:25.289771   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:25.376030   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:25.376082   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:27.914186   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:27.926651   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:27.926716   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:27.965235   70686 cri.go:89] found id: ""
	I0127 11:48:27.965263   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.965273   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:27.965279   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:27.965334   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:27.999266   70686 cri.go:89] found id: ""
	I0127 11:48:27.999301   70686 logs.go:282] 0 containers: []
	W0127 11:48:27.999312   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:27.999320   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:27.999377   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:28.031394   70686 cri.go:89] found id: ""
	I0127 11:48:28.031442   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.031454   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:28.031462   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:28.031524   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:28.063460   70686 cri.go:89] found id: ""
	I0127 11:48:28.063494   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.063505   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:28.063513   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:28.063579   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:28.098052   70686 cri.go:89] found id: ""
	I0127 11:48:28.098075   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.098082   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:28.098087   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:28.098133   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:28.132561   70686 cri.go:89] found id: ""
	I0127 11:48:28.132592   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.132601   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:28.132609   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:28.132668   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:28.173166   70686 cri.go:89] found id: ""
	I0127 11:48:28.173197   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.173206   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:28.173212   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:28.173263   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:28.207104   70686 cri.go:89] found id: ""
	I0127 11:48:28.207134   70686 logs.go:282] 0 containers: []
	W0127 11:48:28.207144   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:28.207155   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:28.207169   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:28.255860   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:28.255897   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:28.270823   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:28.270849   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:28.340536   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:28.340562   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:28.340577   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:28.424875   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:28.424910   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:26.746474   70237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.570747097s)
	I0127 11:48:26.746545   70237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:26.762637   70237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:26.776063   70237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:26.789742   70237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:26.789766   70237 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:26.789818   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 11:48:26.800449   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:26.800505   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:26.818106   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 11:48:26.827104   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:26.827167   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:26.844719   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.861215   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:26.861299   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:26.877899   70237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 11:48:26.886638   70237 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:26.886691   70237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
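Each of the four grep checks exits with status 2 because the kubeadm reset above already deleted /etc/kubernetes/*.conf, so there are no stale configs left to match; minikube still runs the rm -f pass so the directory is in a known-clean state before kubeadm init. The check-then-remove pattern amounts to the following sketch (file names and endpoint copied from the log; the loop itself is an illustration):

	ep='https://control-plane.minikube.internal:8444'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done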
	I0127 11:48:26.895347   70237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:27.038970   70237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:34.381659   70237 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:48:34.381747   70237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:48:34.381834   70237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:48:34.382006   70237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:48:34.382166   70237 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:48:34.382273   70237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:48:34.384155   70237 out.go:235]   - Generating certificates and keys ...
	I0127 11:48:34.384281   70237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:48:34.384383   70237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:48:34.384475   70237 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:48:34.384540   70237 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:48:34.384619   70237 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:48:34.384712   70237 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:48:34.384815   70237 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:48:34.384870   70237 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:48:34.384936   70237 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:48:34.385045   70237 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:48:34.385125   70237 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:48:34.385205   70237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:48:34.385276   70237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:48:34.385331   70237 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:48:34.385408   70237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:48:34.385500   70237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:48:34.385576   70237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:48:34.385691   70237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:48:34.385779   70237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:48:34.387105   70237 out.go:235]   - Booting up control plane ...
	I0127 11:48:34.387208   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:48:34.387285   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:48:34.387359   70237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:48:34.387457   70237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:48:34.387545   70237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:48:34.387589   70237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:48:34.387728   70237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:48:34.387818   70237 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:48:34.387875   70237 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001607262s
	I0127 11:48:34.387947   70237 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:48:34.388039   70237 kubeadm.go:310] [api-check] The API server is healthy after 4.002263796s
	I0127 11:48:34.388196   70237 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:48:34.388338   70237 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:48:34.388399   70237 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:48:34.388623   70237 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-407489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:48:34.388706   70237 kubeadm.go:310] [bootstrap-token] Using token: n96bmw.dtq43nz27fzxgr8y
	I0127 11:48:34.390189   70237 out.go:235]   - Configuring RBAC rules ...
	I0127 11:48:34.390316   70237 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:48:34.390409   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:48:34.390579   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:48:34.390756   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:48:34.390876   70237 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:48:34.390986   70237 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:48:34.391159   70237 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:48:34.391231   70237 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:48:34.391299   70237 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:48:34.391310   70237 kubeadm.go:310] 
	I0127 11:48:34.391403   70237 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:48:34.391413   70237 kubeadm.go:310] 
	I0127 11:48:34.391518   70237 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:48:34.391530   70237 kubeadm.go:310] 
	I0127 11:48:34.391577   70237 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:48:34.391699   70237 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:48:34.391769   70237 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:48:34.391776   70237 kubeadm.go:310] 
	I0127 11:48:34.391868   70237 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:48:34.391882   70237 kubeadm.go:310] 
	I0127 11:48:34.391943   70237 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:48:34.391952   70237 kubeadm.go:310] 
	I0127 11:48:34.392024   70237 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:48:34.392099   70237 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:48:34.392204   70237 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:48:34.392219   70237 kubeadm.go:310] 
	I0127 11:48:34.392359   70237 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:48:34.392465   70237 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:48:34.392480   70237 kubeadm.go:310] 
	I0127 11:48:34.392616   70237 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.392829   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 \
	I0127 11:48:34.392944   70237 kubeadm.go:310] 	--control-plane 
	I0127 11:48:34.392963   70237 kubeadm.go:310] 
	I0127 11:48:34.393089   70237 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:48:34.393100   70237 kubeadm.go:310] 
	I0127 11:48:34.393184   70237 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n96bmw.dtq43nz27fzxgr8y \
	I0127 11:48:34.393325   70237 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7084aae907d2f355d0cf61bc5f6d5173282546e49cef05f3cd029f14b2feb926 
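	
For reference, the --discovery-token-ca-cert-hash printed in the join commands above is not a secret: it is the SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A minimal Go sketch that reproduces it, assuming the standard kubeadm CA path /etc/kubernetes/pki/ca.crt:

	// Recompute the "kubeadm join --discovery-token-ca-cert-hash" value.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// DER-encode the certificate's SubjectPublicKeyInfo and hash it.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
	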
	I0127 11:48:34.393340   70237 cni.go:84] Creating CNI manager for ""
	I0127 11:48:34.393350   70237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 11:48:34.394995   70237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 11:48:30.970758   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:30.987346   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:30.987422   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:31.022870   70686 cri.go:89] found id: ""
	I0127 11:48:31.022900   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.022911   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:31.022919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:31.022980   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:31.056491   70686 cri.go:89] found id: ""
	I0127 11:48:31.056519   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.056529   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:31.056537   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:31.056593   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:31.091268   70686 cri.go:89] found id: ""
	I0127 11:48:31.091301   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.091313   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:31.091320   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:31.091378   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:31.124449   70686 cri.go:89] found id: ""
	I0127 11:48:31.124479   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.124489   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:31.124497   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:31.124565   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:31.167383   70686 cri.go:89] found id: ""
	I0127 11:48:31.167410   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.167418   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:31.167424   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:31.167473   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:31.205066   70686 cri.go:89] found id: ""
	I0127 11:48:31.205165   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.205185   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:31.205194   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:31.205265   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:31.242101   70686 cri.go:89] found id: ""
	I0127 11:48:31.242132   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.242144   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:31.242151   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:31.242208   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:31.278496   70686 cri.go:89] found id: ""
	I0127 11:48:31.278595   70686 logs.go:282] 0 containers: []
	W0127 11:48:31.278610   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:31.278622   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:31.278645   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:31.366805   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:31.366835   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:31.366851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:31.445608   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:31.445642   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:31.487502   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:31.487529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:31.566139   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:31.566171   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.080397   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:34.094121   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:34.094187   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:34.131591   70686 cri.go:89] found id: ""
	I0127 11:48:34.131635   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.131646   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:34.131654   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:34.131711   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:34.167143   70686 cri.go:89] found id: ""
	I0127 11:48:34.167175   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.167185   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:34.167192   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:34.167259   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:34.203241   70686 cri.go:89] found id: ""
	I0127 11:48:34.203270   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.203283   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:34.203290   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:34.203349   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:34.238023   70686 cri.go:89] found id: ""
	I0127 11:48:34.238053   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.238061   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:34.238067   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:34.238115   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:34.273362   70686 cri.go:89] found id: ""
	I0127 11:48:34.273388   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.273398   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:34.273406   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:34.273469   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:34.310047   70686 cri.go:89] found id: ""
	I0127 11:48:34.310073   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.310084   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:34.310092   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:34.310148   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:34.346880   70686 cri.go:89] found id: ""
	I0127 11:48:34.346914   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.346924   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:34.346932   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:34.346987   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:34.382306   70686 cri.go:89] found id: ""
	I0127 11:48:34.382327   70686 logs.go:282] 0 containers: []
	W0127 11:48:34.382339   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:34.382348   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:34.382364   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:34.494656   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:34.494697   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:34.541974   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:34.542009   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:34.619534   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:34.619584   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:34.634607   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:34.634631   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:34.705419   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
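	
The cycle above (process 70686) repeats the same probe: list CRI containers by name with crictl, and when none are found, fall back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal Go sketch of the listing step, not the actual minikube cri.go implementation:

	// List CRI container IDs matching a name filter, as in the
	// "sudo crictl ps -a --quiet --name=..." runs above. An empty
	// result corresponds to the `found id: ""` / "0 containers: []" lines.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listCRIContainers("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
	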
	I0127 11:48:34.396212   70237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 11:48:34.408954   70237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
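	
The 1-k8s.conflist written above is a CNI plugin-chain config for the bridge network chosen at "Configuring bridge CNI". The exact 496-byte payload is not shown in the log; the sketch below emits an illustrative bridge + portmap conflist of the same general shape, with field values that are assumptions rather than minikube's exact defaults:

	// Emit an illustrative bridge CNI conflist like the one scp'd to
	// /etc/cni/net.d/1-k8s.conflist. Values are assumptions for illustration.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}
	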
	I0127 11:48:34.431113   70237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:48:34.431252   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:34.431257   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-407489 minikube.k8s.io/updated_at=2025_01_27T11_48_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=default-k8s-diff-port-407489 minikube.k8s.io/primary=true
	I0127 11:48:34.469468   70237 ops.go:34] apiserver oom_adj: -16
	I0127 11:48:34.666106   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.167035   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:35.667149   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.167156   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:36.666148   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.167090   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:37.667139   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.166714   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:38.666209   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.166966   70237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:48:39.353909   70237 kubeadm.go:1113] duration metric: took 4.922724686s to wait for elevateKubeSystemPrivileges
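	
The burst of "kubectl get sa default" runs above is a readiness poll: kubeadm creates the default ServiceAccount asynchronously, and the cluster-admin binding step (elevateKubeSystemPrivileges) is gated on it. A hedged sketch of that loop; the 500ms cadence matches the timestamps above, while the 2-minute timeout is an assumption:

	// Poll until the "default" ServiceAccount exists in the new cluster.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.32.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~.166/.666 timestamps above
		}
		fmt.Println("timed out waiting for default service account")
	}
	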
	I0127 11:48:39.353963   70237 kubeadm.go:394] duration metric: took 4m58.742572387s to StartCluster
	I0127 11:48:39.353997   70237 settings.go:142] acquiring lock: {Name:mk45ae17114e966eee31f74fd1ca7e2ef4833a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.354112   70237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:48:39.356217   70237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-18835/kubeconfig: {Name:mk59f7601a70005cfb2fc7996e118e36c422370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:48:39.356516   70237 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.69 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:48:39.356640   70237 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:48:39.356750   70237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356777   70237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356786   70237 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356793   70237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-407489"
	I0127 11:48:39.356805   70237 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356806   70237 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:39.356812   70237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-407489"
	W0127 11:48:39.356815   70237 addons.go:247] addon metrics-server should already be in state true
	W0127 11:48:39.356814   70237 addons.go:247] addon dashboard should already be in state true
	W0127 11:48:39.356785   70237 addons.go:247] addon storage-provisioner should already be in state true
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356919   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.356780   70237 config.go:182] Loaded profile config "default-k8s-diff-port-407489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:48:39.356858   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.357367   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357421   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357452   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357461   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.357470   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357481   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357489   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.357427   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.358335   70237 out.go:177] * Verifying Kubernetes components...
	I0127 11:48:39.359875   70237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:48:39.375814   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0127 11:48:39.376161   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0127 11:48:39.376320   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376584   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.376816   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376834   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.376964   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.376976   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.377329   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.377542   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.377878   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.378406   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.378448   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.378664   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0127 11:48:39.378707   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0127 11:48:39.379469   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.379520   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.380020   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.380031   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.380391   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.380901   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.380937   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.381376   70237 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-407489"
	W0127 11:48:39.381392   70237 addons.go:247] addon default-storageclass should already be in state true
	I0127 11:48:39.381420   70237 host.go:66] Checking if "default-k8s-diff-port-407489" exists ...
	I0127 11:48:39.381774   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.381828   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.382425   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.382444   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.382932   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.383472   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.383515   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.399683   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0127 11:48:39.400302   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.400882   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.400901   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.401296   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.401495   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0127 11:48:39.401654   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0127 11:48:39.401894   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.401947   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402556   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.402892   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402909   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.402980   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.402997   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.403362   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0127 11:48:39.403805   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.403823   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.404268   70237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:48:39.404296   70237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:48:39.404472   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.404848   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.404929   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.405710   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.405726   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.406261   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.406477   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.406675   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.407171   70237 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 11:48:39.408344   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.408427   70237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:48:39.409688   70237 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 11:48:39.409753   70237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
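	
The "Plugin server listening at address 127.0.0.1:<port>" lines come from the libmachine plugin model: each kvm2 driver instance runs as a child process serving RPC on an ephemeral localhost port, which the parent process then dials for calls such as GetVersion, GetMachineName, and GetState. A minimal net/rpc sketch of that pattern; the Driver type here is illustrative, not the real kvm2 driver:

	// Serve driver calls over RPC on an ephemeral localhost port,
	// mimicking the libmachine plugin-server lines above.
	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	type Driver struct{}

	func (d *Driver) GetState(_ struct{}, reply *string) error {
		*reply = "Running" // assumption: a real driver would query libvirt here
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, e.g. 127.0.0.1:34529
		if err != nil {
			panic(err)
		}
		fmt.Println("Plugin server listening at address", ln.Addr())
		srv.Accept(ln) // serve connections until the parent tears the plugin down
	}
	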
	I0127 11:48:37.206052   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:37.219444   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:37.219530   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:37.254304   70686 cri.go:89] found id: ""
	I0127 11:48:37.254334   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.254342   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:37.254349   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:37.254409   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:37.291229   70686 cri.go:89] found id: ""
	I0127 11:48:37.291264   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.291276   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:37.291289   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:37.291353   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:37.329358   70686 cri.go:89] found id: ""
	I0127 11:48:37.329381   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.329389   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:37.329394   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:37.329439   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:37.368500   70686 cri.go:89] found id: ""
	I0127 11:48:37.368529   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.368537   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:37.368543   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:37.368604   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:37.400175   70686 cri.go:89] found id: ""
	I0127 11:48:37.400203   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.400213   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:37.400221   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:37.400284   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:37.432661   70686 cri.go:89] found id: ""
	I0127 11:48:37.432687   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.432697   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:37.432704   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:37.432762   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:37.464843   70686 cri.go:89] found id: ""
	I0127 11:48:37.464886   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.464897   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:37.464905   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:37.464970   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:37.501795   70686 cri.go:89] found id: ""
	I0127 11:48:37.501818   70686 logs.go:282] 0 containers: []
	W0127 11:48:37.501826   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:37.501835   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:37.501845   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:37.580256   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:37.580281   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:37.580297   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:37.658741   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:37.658790   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:37.701171   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:37.701198   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:37.761906   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:37.761941   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.280848   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:40.294890   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:40.294962   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:40.333860   70686 cri.go:89] found id: ""
	I0127 11:48:40.333885   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.333904   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:40.333919   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:40.333983   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:40.377039   70686 cri.go:89] found id: ""
	I0127 11:48:40.377072   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.377083   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:40.377093   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:40.377157   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:40.413874   70686 cri.go:89] found id: ""
	I0127 11:48:40.413899   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.413909   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:40.413915   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:40.413976   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:40.453270   70686 cri.go:89] found id: ""
	I0127 11:48:40.453302   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.453313   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:40.453322   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:40.453438   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:40.495704   70686 cri.go:89] found id: ""
	I0127 11:48:40.495739   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.495750   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:40.495759   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:40.495825   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:40.541078   70686 cri.go:89] found id: ""
	I0127 11:48:40.541117   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.541128   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:40.541135   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:40.541195   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:40.577161   70686 cri.go:89] found id: ""
	I0127 11:48:40.577190   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.577201   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:40.577207   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:40.577267   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:40.611784   70686 cri.go:89] found id: ""
	I0127 11:48:40.611815   70686 logs.go:282] 0 containers: []
	W0127 11:48:40.611825   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:48:40.611837   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:40.611851   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:40.627400   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:40.627429   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:40.697583   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:40.697609   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:40.697624   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:40.779493   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:40.779529   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:48:40.829083   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:40.829117   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:39.409927   70237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.409949   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:48:39.409969   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410883   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 11:48:39.410891   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:48:39.410900   70237 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 11:48:39.410901   70237 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.410918   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.414712   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415032   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415363   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415380   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415508   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415557   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.415793   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.415795   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.415811   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.415965   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416023   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416188   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.416193   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416207   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.416226   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.416326   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.416464   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.416647   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.416856   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.417093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.417232   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.425335   70237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0127 11:48:39.425726   70237 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:48:39.426147   70237 main.go:141] libmachine: Using API Version  1
	I0127 11:48:39.426164   70237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:48:39.426496   70237 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:48:39.426691   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetState
	I0127 11:48:39.428519   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .DriverName
	I0127 11:48:39.428734   70237 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.428750   70237 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:48:39.428767   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHHostname
	I0127 11:48:39.431736   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.431955   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a3:a0", ip: ""} in network mk-default-k8s-diff-port-407489: {Iface:virbr2 ExpiryTime:2025-01-27 12:43:27 +0000 UTC Type:0 Mac:52:54:00:04:a3:a0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:default-k8s-diff-port-407489 Clientid:01:52:54:00:04:a3:a0}
	I0127 11:48:39.431979   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | domain default-k8s-diff-port-407489 has defined IP address 192.168.39.69 and MAC address 52:54:00:04:a3:a0 in network mk-default-k8s-diff-port-407489
	I0127 11:48:39.432148   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHPort
	I0127 11:48:39.432352   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHKeyPath
	I0127 11:48:39.432522   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .GetSSHUsername
	I0127 11:48:39.432669   70237 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/default-k8s-diff-port-407489/id_rsa Username:docker}
	I0127 11:48:39.622216   70237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:48:39.650134   70237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677286   70237 node_ready.go:49] node "default-k8s-diff-port-407489" has status "Ready":"True"
	I0127 11:48:39.677309   70237 node_ready.go:38] duration metric: took 27.135622ms for node "default-k8s-diff-port-407489" to be "Ready" ...
	I0127 11:48:39.677318   70237 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:48:39.687667   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
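	
node_ready.go above polls the node's Ready condition (here it was already "True" after ~27ms, since this is a restarted cluster). A hedged sketch of the same check driven through kubectl's jsonpath output rather than minikube's client-go code:

	// Poll a node's Ready condition until true or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func nodeReady(name string) bool {
		out, err := exec.Command("kubectl", "get", "node", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		const node = "default-k8s-diff-port-407489"
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget above
		for time.Now().Before(deadline) {
			if nodeReady(node) {
				fmt.Println("node", node, "is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for", node)
	}
	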
	I0127 11:48:39.731665   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:48:39.746831   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:48:39.793916   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 11:48:39.793939   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 11:48:39.875140   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 11:48:39.875167   70237 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 11:48:39.930947   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:48:39.930970   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 11:48:39.943793   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 11:48:39.943816   70237 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 11:48:39.993962   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 11:48:39.993993   70237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 11:48:40.041925   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:48:40.041962   70237 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:48:40.045715   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 11:48:40.045733   70237 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 11:48:40.168240   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 11:48:40.168261   70237 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 11:48:40.170308   70237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.170329   70237 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:48:40.222208   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 11:48:40.222229   70237 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 11:48:40.226028   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:48:40.312875   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 11:48:40.312990   70237 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 11:48:40.389058   70237 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.389088   70237 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 11:48:40.437979   70237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 11:48:40.764016   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017148966s)
	I0127 11:48:40.764080   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764093   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764098   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032393238s)
	I0127 11:48:40.764145   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764163   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764466   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764476   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:40.764483   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764520   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764535   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764525   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.764555   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764564   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.764785   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764804   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.764924   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.764937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:40.781921   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:40.781947   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:40.782236   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:40.782254   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294495   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.068429548s)
	I0127 11:48:41.294547   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294560   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.294909   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.294914   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.294937   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.294945   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.294952   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.295173   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.295220   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.295238   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.295255   70237 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-407489"
	I0127 11:48:41.723523   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:41.929362   70237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.491326001s)
	I0127 11:48:41.929422   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929437   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.929779   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.929797   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.929815   70237 main.go:141] libmachine: Making call to close driver server
	I0127 11:48:41.929825   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) Calling .Close
	I0127 11:48:41.930103   70237 main.go:141] libmachine: Successfully made call to close driver server
	I0127 11:48:41.930125   70237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 11:48:41.930151   70237 main.go:141] libmachine: (default-k8s-diff-port-407489) DBG | Closing plugin on server side
	I0127 11:48:41.931487   70237 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-407489 addons enable metrics-server
	
	I0127 11:48:41.933107   70237 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 11:48:43.382411   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:43.399629   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:48:43.399702   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:48:43.433083   70686 cri.go:89] found id: ""
	I0127 11:48:43.433116   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.433127   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:48:43.433134   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:48:43.433207   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:48:43.471725   70686 cri.go:89] found id: ""
	I0127 11:48:43.471756   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.471788   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:48:43.471796   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:48:43.471861   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:48:43.505911   70686 cri.go:89] found id: ""
	I0127 11:48:43.505944   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.505956   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:48:43.505964   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:48:43.506034   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:48:43.545670   70686 cri.go:89] found id: ""
	I0127 11:48:43.545705   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.545715   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:48:43.545723   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:48:43.545773   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:48:43.588086   70686 cri.go:89] found id: ""
	I0127 11:48:43.588113   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.588124   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:48:43.588131   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:48:43.588193   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:48:43.626703   70686 cri.go:89] found id: ""
	I0127 11:48:43.626739   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.626747   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:48:43.626754   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:48:43.626810   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:48:43.666123   70686 cri.go:89] found id: ""
	I0127 11:48:43.666155   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.666164   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:48:43.666171   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:48:43.666237   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:48:43.701503   70686 cri.go:89] found id: ""
	I0127 11:48:43.701527   70686 logs.go:282] 0 containers: []
	W0127 11:48:43.701537   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
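
	The sweep above asks the CRI for each control-plane component by name; an empty ID list is what produces the "No container was found matching" warnings. A Go sketch of that lookup, assuming crictl is installed and runnable via sudo (illustrative only, not minikube's implementation):

	// listcri.go - sketch of the per-component container lookup above,
	// mirroring: sudo crictl ps -a --quiet --name=<name>
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (any state) whose
	// name matches the given filter.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids, err := containerIDs(c)
			switch {
			case err != nil:
				fmt.Println(err)
			case len(ids) == 0:
				fmt.Printf("no container was found matching %q\n", c)
			default:
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}
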
	I0127 11:48:43.701548   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:48:43.701561   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:48:43.752145   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:48:43.752177   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:48:43.766551   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:48:43.766579   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:48:43.838715   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:48:43.838740   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:48:43.838753   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:48:43.923406   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:48:43.923439   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
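
	The log-gathering pass above collects a fixed diagnostics bundle: the kubelet and CRI-O journals, filtered dmesg, and a container listing with a docker fallback. A compact Go sketch of that bundle, with the command strings taken verbatim from the log (the runner is hypothetical):

	// gatherlogs.go - sketch of the diagnostics bundle collected above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		diagnostics := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, d := range diagnostics {
			// CombinedOutput still returns whatever the command printed
			// even when it exits non-zero, which is what we want here.
			out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s", d.name, out)
			if err != nil {
				fmt.Println("(command failed:", err, ")")
			}
		}
	}
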
	I0127 11:48:41.934427   70237 addons.go:514] duration metric: took 2.577793658s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 11:48:44.193593   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:46.470479   70686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:46.483541   70686 kubeadm.go:597] duration metric: took 4m2.154865283s to restartPrimaryControlPlane
	W0127 11:48:46.483635   70686 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 11:48:46.483664   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:48:46.956612   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:48:46.970448   70686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:48:46.979726   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:48:46.990401   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:48:46.990418   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:48:46.990456   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:48:46.999850   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:48:46.999921   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:48:47.009371   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:48:47.019126   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:48:47.019177   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:48:47.029905   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.040611   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:48:47.040690   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:48:47.051767   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:48:47.063007   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:48:47.063076   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
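
	The grep-then-rm sequence above implements a simple rule: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, delete it so the next kubeadm init regenerates it (a missing file, grep exiting with status 2, is treated the same way). A minimal sketch of that rule, with the paths and endpoint from the log; the helper itself is hypothetical:

	// staleconf.go - sketch of the stale-kubeconfig cleanup above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// removeIfStale deletes path unless it already references the
	// expected control-plane endpoint. A missing file is not an error.
	func removeIfStale(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil
			}
			return err
		}
		if bytes.Contains(data, []byte(endpoint)) {
			return nil // config already points at the expected endpoint
		}
		fmt.Printf("%s may not reference %s - removing\n", path, endpoint)
		return os.Remove(path)
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(f); err != nil {
				fmt.Println(err)
			}
		}
	}
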
	I0127 11:48:47.074431   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:48:47.304989   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:48:46.196598   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:48.696840   70237 pod_ready.go:103] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"False"
	I0127 11:48:49.199550   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.199588   70237 pod_ready.go:82] duration metric: took 9.511896787s for pod "coredns-668d6bf9bc-pd5ml" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.199600   70237 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205893   70237 pod_ready.go:93] pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.205926   70237 pod_ready.go:82] duration metric: took 6.298932ms for pod "coredns-668d6bf9bc-sdf87" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.205940   70237 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239052   70237 pod_ready.go:93] pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.239081   70237 pod_ready.go:82] duration metric: took 33.131129ms for pod "etcd-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.239094   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265456   70237 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.265491   70237 pod_ready.go:82] duration metric: took 26.386948ms for pod "kube-apiserver-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.265505   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272301   70237 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.272330   70237 pod_ready.go:82] duration metric: took 6.816295ms for pod "kube-controller-manager-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.272342   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591592   70237 pod_ready.go:93] pod "kube-proxy-26pw8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.591640   70237 pod_ready.go:82] duration metric: took 319.289955ms for pod "kube-proxy-26pw8" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.591655   70237 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991689   70237 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace has status "Ready":"True"
	I0127 11:48:49.991721   70237 pod_ready.go:82] duration metric: took 400.056967ms for pod "kube-scheduler-default-k8s-diff-port-407489" in "kube-system" namespace to be "Ready" ...
	I0127 11:48:49.991733   70237 pod_ready.go:39] duration metric: took 10.314402994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
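
	The pod_ready waits above all follow one pattern: poll a readiness condition on a short interval until it reports true or a deadline (here 6m0s per pod) expires, recording the elapsed time as a duration metric. A minimal sketch of that polling loop; the condition function below is a placeholder, not a real pod check:

	// poll.go - sketch of the wait-until-Ready pattern above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls cond every interval until it returns true or the
	// timeout elapses.
	func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := cond()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for the condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := waitFor(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
			// Placeholder condition; a real check would read the pod's
			// Ready status from the apiserver.
			return time.Since(start) > 2*time.Second, nil
		})
		fmt.Println("waited", time.Since(start).Round(time.Millisecond), "err:", err)
	}

	The kubeadm wait-control-plane phase later in this log is the same pattern with a 4m0s budget.
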
	I0127 11:48:49.991751   70237 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:48:49.991813   70237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:48:50.013067   70237 api_server.go:72] duration metric: took 10.656516392s to wait for apiserver process to appear ...
	I0127 11:48:50.013088   70237 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:48:50.013114   70237 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8444/healthz ...
	I0127 11:48:50.018115   70237 api_server.go:279] https://192.168.39.69:8444/healthz returned 200:
	ok
	I0127 11:48:50.019049   70237 api_server.go:141] control plane version: v1.32.1
	I0127 11:48:50.019078   70237 api_server.go:131] duration metric: took 5.982015ms to wait for apiserver health ...
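
	The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding on a 200 with body "ok". A sketch of the same probe; TLS verification is skipped here only to keep the example self-contained, where a real client would trust the cluster CA instead:

	// healthz.go - sketch of the apiserver health probe above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only; do not skip verification in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.69:8444/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	}
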
	I0127 11:48:50.019088   70237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:48:50.196032   70237 system_pods.go:59] 9 kube-system pods found
	I0127 11:48:50.196064   70237 system_pods.go:61] "coredns-668d6bf9bc-pd5ml" [c33b4c24-e93a-4370-a289-6dca24315394] Running
	I0127 11:48:50.196070   70237 system_pods.go:61] "coredns-668d6bf9bc-sdf87" [30fc6237-1829-4315-b9cf-3354bd7a96a5] Running
	I0127 11:48:50.196075   70237 system_pods.go:61] "etcd-default-k8s-diff-port-407489" [d228476b-110d-4de7-9afe-08c2371bbb0e] Running
	I0127 11:48:50.196079   70237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-407489" [a059a0c6-34f1-46c3-9b67-adef842174f9] Running
	I0127 11:48:50.196083   70237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-407489" [aa65ad17-6f66-42c1-ad23-199b374d2104] Running
	I0127 11:48:50.196087   70237 system_pods.go:61] "kube-proxy-26pw8" [c3b9b1b2-6a71-4cd0-819f-5fde4e6bd510] Running
	I0127 11:48:50.196090   70237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-407489" [190cc5cb-ab22-4143-a84a-3c4d975728c3] Running
	I0127 11:48:50.196098   70237 system_pods.go:61] "metrics-server-f79f97bbb-d7r6d" [6bd8680e-8338-48a2-b29b-a913d195bc9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 11:48:50.196102   70237 system_pods.go:61] "storage-provisioner" [58b014bb-8629-4398-a2ec-6ec95fa59111] Running
	I0127 11:48:50.196111   70237 system_pods.go:74] duration metric: took 177.016669ms to wait for pod list to return data ...
	I0127 11:48:50.196118   70237 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:48:50.392617   70237 default_sa.go:45] found service account: "default"
	I0127 11:48:50.392652   70237 default_sa.go:55] duration metric: took 196.52383ms for default service account to be created ...
	I0127 11:48:50.392664   70237 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:48:50.594360   70237 system_pods.go:87] 9 kube-system pods found
	I0127 11:50:43.920463   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:50:43.920584   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:50:43.922146   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:43.922214   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:43.922320   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:43.922480   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:43.922613   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:43.922673   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:43.924430   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:43.924530   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:43.924611   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:43.924680   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:43.924766   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:43.924851   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:43.924925   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:43.924977   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:43.925025   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:43.925150   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:43.925259   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:43.925316   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:43.925398   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:43.925467   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:43.925544   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:43.925633   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:43.925704   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:43.925839   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:43.925952   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:43.926012   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:43.926098   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:43.927567   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:43.927670   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:43.927749   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:43.927813   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:43.927885   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:43.928078   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:50:43.928123   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:50:43.928184   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928340   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928398   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928569   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928631   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.928792   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.928850   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929077   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929185   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:50:43.929391   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:50:43.929402   70686 kubeadm.go:310] 
	I0127 11:50:43.929456   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:50:43.929518   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:50:43.929531   70686 kubeadm.go:310] 
	I0127 11:50:43.929584   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:50:43.929647   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:50:43.929784   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:50:43.929800   70686 kubeadm.go:310] 
	I0127 11:50:43.929915   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:50:43.929961   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:50:43.930009   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:50:43.930019   70686 kubeadm.go:310] 
	I0127 11:50:43.930137   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:50:43.930253   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:50:43.930266   70686 kubeadm.go:310] 
	I0127 11:50:43.930419   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:50:43.930528   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:50:43.930621   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:50:43.930695   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:50:43.930745   70686 kubeadm.go:310] 
	W0127 11:50:43.930804   70686 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
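
	The failure above repeats one symptom: the kubelet's health endpoint on 127.0.0.1:10248 refuses connections, so the wait-control-plane phase times out. The check kubeadm runs is equivalent to the curl command it prints; a Go sketch of the same probe, to be run on the node itself (illustrative only):

	// kubeletcheck.go - sketch of the kubelet health probe kubeadm
	// describes above ('curl -sSL http://localhost:10248/healthz').
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 3 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" here is the same symptom as in the log:
			// the kubelet is not running, or not serving on port 10248.
			fmt.Println("kubelet healthz failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
	}
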
	
	I0127 11:50:43.930840   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 11:50:44.381980   70686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:44.397504   70686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:50:44.407258   70686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:50:44.407280   70686 kubeadm.go:157] found existing configuration files:
	
	I0127 11:50:44.407331   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:50:44.416517   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:50:44.416588   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:50:44.425543   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:50:44.433996   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:50:44.434043   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:50:44.442792   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.452342   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:50:44.452410   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:50:44.462650   70686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:50:44.471925   70686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:50:44.471985   70686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:50:44.481004   70686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:50:44.552326   70686 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 11:50:44.552414   70686 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:50:44.696875   70686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:50:44.697032   70686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:50:44.697169   70686 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:50:44.872468   70686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:50:44.875109   70686 out.go:235]   - Generating certificates and keys ...
	I0127 11:50:44.875201   70686 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:50:44.875263   70686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:50:44.875350   70686 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 11:50:44.875402   70686 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 11:50:44.875466   70686 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 11:50:44.875514   70686 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 11:50:44.875570   70686 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 11:50:44.875679   70686 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 11:50:44.875792   70686 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 11:50:44.875910   70686 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 11:50:44.875976   70686 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 11:50:44.876030   70686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:50:45.015504   70686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:50:45.106020   70686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:50:45.326707   70686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:50:45.574018   70686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:50:45.595960   70686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:50:45.597194   70686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:50:45.597402   70686 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:50:45.740527   70686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:50:45.743100   70686 out.go:235]   - Booting up control plane ...
	I0127 11:50:45.743237   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:50:45.746496   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:50:45.747484   70686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:50:45.748125   70686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:50:45.750039   70686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:51:25.751949   70686 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 11:51:25.752243   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:25.752539   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:30.752865   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:30.753104   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:51:40.753548   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:51:40.753726   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:00.754215   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:00.754448   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753038   70686 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 11:52:40.753327   70686 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 11:52:40.753353   70686 kubeadm.go:310] 
	I0127 11:52:40.753414   70686 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 11:52:40.753473   70686 kubeadm.go:310] 		timed out waiting for the condition
	I0127 11:52:40.753483   70686 kubeadm.go:310] 
	I0127 11:52:40.753541   70686 kubeadm.go:310] 	This error is likely caused by:
	I0127 11:52:40.753590   70686 kubeadm.go:310] 		- The kubelet is not running
	I0127 11:52:40.753730   70686 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 11:52:40.753743   70686 kubeadm.go:310] 
	I0127 11:52:40.753898   70686 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 11:52:40.753957   70686 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 11:52:40.754014   70686 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 11:52:40.754030   70686 kubeadm.go:310] 
	I0127 11:52:40.754195   70686 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 11:52:40.754312   70686 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 11:52:40.754321   70686 kubeadm.go:310] 
	I0127 11:52:40.754453   70686 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 11:52:40.754573   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 11:52:40.754670   70686 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 11:52:40.754766   70686 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 11:52:40.754777   70686 kubeadm.go:310] 
	I0127 11:52:40.755376   70686 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:52:40.755478   70686 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 11:52:40.755572   70686 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 11:52:40.755648   70686 kubeadm.go:394] duration metric: took 7m56.47359007s to StartCluster
	I0127 11:52:40.755695   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:52:40.755757   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:52:40.792993   70686 cri.go:89] found id: ""
	I0127 11:52:40.793026   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.793045   70686 logs.go:284] No container was found matching "kube-apiserver"
	I0127 11:52:40.793055   70686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:52:40.793116   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:52:40.832368   70686 cri.go:89] found id: ""
	I0127 11:52:40.832397   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.832410   70686 logs.go:284] No container was found matching "etcd"
	I0127 11:52:40.832417   70686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:52:40.832478   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:52:40.865175   70686 cri.go:89] found id: ""
	I0127 11:52:40.865199   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.865208   70686 logs.go:284] No container was found matching "coredns"
	I0127 11:52:40.865215   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:52:40.865280   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:52:40.896556   70686 cri.go:89] found id: ""
	I0127 11:52:40.896586   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.896594   70686 logs.go:284] No container was found matching "kube-scheduler"
	I0127 11:52:40.896600   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:52:40.896648   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:52:40.928729   70686 cri.go:89] found id: ""
	I0127 11:52:40.928765   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.928777   70686 logs.go:284] No container was found matching "kube-proxy"
	I0127 11:52:40.928784   70686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:52:40.928852   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:52:40.962998   70686 cri.go:89] found id: ""
	I0127 11:52:40.963029   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.963039   70686 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 11:52:40.963053   70686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:52:40.963111   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:52:40.994577   70686 cri.go:89] found id: ""
	I0127 11:52:40.994606   70686 logs.go:282] 0 containers: []
	W0127 11:52:40.994616   70686 logs.go:284] No container was found matching "kindnet"
	I0127 11:52:40.994623   70686 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 11:52:40.994669   70686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 11:52:41.030825   70686 cri.go:89] found id: ""
	I0127 11:52:41.030861   70686 logs.go:282] 0 containers: []
	W0127 11:52:41.030872   70686 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 11:52:41.030884   70686 logs.go:123] Gathering logs for kubelet ...
	I0127 11:52:41.030900   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:52:41.084683   70686 logs.go:123] Gathering logs for dmesg ...
	I0127 11:52:41.084714   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:52:41.098908   70686 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:52:41.098946   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 11:52:41.176430   70686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 11:52:41.176453   70686 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:52:41.176465   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:52:41.290183   70686 logs.go:123] Gathering logs for container status ...
	I0127 11:52:41.290219   70686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 11:52:41.336066   70686 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 11:52:41.336124   70686 out.go:270] * 
	W0127 11:52:41.336202   70686 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:52:41.336227   70686 out.go:270] * 
	W0127 11:52:41.337558   70686 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 11:52:41.341361   70686 out.go:201] 
	W0127 11:52:41.342596   70686 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 11:52:41.342686   70686 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 11:52:41.342709   70686 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 11:52:41.344162   70686 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.772719359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661772697603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=801bc748-9b7f-4b53-8061-f118d080d525 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.773249697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aad5b3b-e685-4a5e-96b9-d02e5cffe28a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.773312480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aad5b3b-e685-4a5e-96b9-d02e5cffe28a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.773351464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0aad5b3b-e685-4a5e-96b9-d02e5cffe28a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.804124602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9397c1fa-8ba9-4aef-8848-9c6e6e692bdd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.804206688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9397c1fa-8ba9-4aef-8848-9c6e6e692bdd name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.805209557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0516c5eb-4638-4f93-b78c-b216ae1cc488 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.805788977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661805760042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0516c5eb-4638-4f93-b78c-b216ae1cc488 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.806516603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aafe49d1-ad8b-4743-b472-3c03ec11e5bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.806568292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aafe49d1-ad8b-4743-b472-3c03ec11e5bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.806612929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aafe49d1-ad8b-4743-b472-3c03ec11e5bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.841226159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66ccc417-0167-42f7-81be-653d35eac7b6 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.841299551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66ccc417-0167-42f7-81be-653d35eac7b6 name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.842174349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95b90652-54ab-43cd-8f6b-7396f8472939 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.842551930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661842534051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95b90652-54ab-43cd-8f6b-7396f8472939 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.842949708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f793969a-b1b1-4fd3-832f-579b8442734e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.842994901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f793969a-b1b1-4fd3-832f-579b8442734e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.843024924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f793969a-b1b1-4fd3-832f-579b8442734e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.872380892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85ece314-a2db-4461-81ee-1c9caf575c1c name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.872516718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85ece314-a2db-4461-81ee-1c9caf575c1c name=/runtime.v1.RuntimeService/Version
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.873538094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bb9e426-b7d6-43dc-816e-b386b5da4b53 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.873923299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737979661873902510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bb9e426-b7d6-43dc-816e-b386b5da4b53 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.874458246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=875ad57a-c412-4061-a909-72e9ab79a345 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.874520278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=875ad57a-c412-4061-a909-72e9ab79a345 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:07:41 old-k8s-version-570778 crio[639]: time="2025-01-27 12:07:41.874553601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=875ad57a-c412-4061-a909-72e9ab79a345 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 11:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049235] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981407] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.993552] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.590314] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.056000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054815] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.178788] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.126988] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.243997] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.090921] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.064410] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.869247] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[ +12.042296] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 11:48] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Jan27 11:50] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.066337] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:07:42 up 23 min,  0 users,  load average: 0.01, 0.06, 0.07
	Linux old-k8s-version-570778 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000cc5ef0)
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cd7ef0, 0x4f0ac20, 0xc000707680, 0x1, 0xc00009e0c0)
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008d6380, 0xc00009e0c0)
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0005ea150, 0xc000b6de20)
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 12:07:37 old-k8s-version-570778 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 12:07:37 old-k8s-version-570778 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 12:07:37 old-k8s-version-570778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 12:07:38 old-k8s-version-570778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 176.
	Jan 27 12:07:38 old-k8s-version-570778 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 12:07:38 old-k8s-version-570778 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 12:07:38 old-k8s-version-570778 kubelet[7184]: I0127 12:07:38.629510    7184 server.go:416] Version: v1.20.0
	Jan 27 12:07:38 old-k8s-version-570778 kubelet[7184]: I0127 12:07:38.629790    7184 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 12:07:38 old-k8s-version-570778 kubelet[7184]: I0127 12:07:38.631600    7184 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 12:07:38 old-k8s-version-570778 kubelet[7184]: W0127 12:07:38.632527    7184 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 12:07:38 old-k8s-version-570778 kubelet[7184]: I0127 12:07:38.632690    7184 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 2 (234.740789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570778" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (356.83s)
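The repeated [kubelet-check] failures in the kubeadm output above come from polling the kubelet's healthz endpoint on port 10248 until a timeout is hit. A minimal Go sketch of an equivalent probe; the retry interval and deadline below are illustrative, not kubeadm's actual values:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubeletHealthz polls http://localhost:10248/healthz, the same
// endpoint the kubeadm [kubelet-check] step curls, and returns nil once
// the kubelet answers 200 OK or an error after the deadline passes.
func probeKubeletHealthz(deadline time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up and healthy
			}
		}
		time.Sleep(5 * time.Second) // retry interval is an assumption
	}
	return fmt.Errorf("kubelet did not become healthy within %s", deadline)
}

func main() {
	if err := probeKubeletHealthz(40 * time.Second); err != nil {
		fmt.Println(err) // e.g. "connection refused" while the kubelet is down
	}
}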

                                                
                                    

Test pass (262/309)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 4.69
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 81.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 129.18
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 16.88
37 TestAddons/parallel/InspektorGadget 11.79
38 TestAddons/parallel/MetricsServer 5.9
40 TestAddons/parallel/CSI 46.75
41 TestAddons/parallel/Headlamp 23.01
42 TestAddons/parallel/CloudSpanner 5.79
43 TestAddons/parallel/LocalPath 57.06
44 TestAddons/parallel/NvidiaDevicePlugin 6.79
45 TestAddons/parallel/Yakd 12.42
47 TestAddons/StoppedEnableDisable 91.22
48 TestCertOptions 61.85
49 TestCertExpiration 269.58
51 TestForceSystemdFlag 72
52 TestForceSystemdEnv 40.91
54 TestKVMDriverInstallOrUpdate 3.8
58 TestErrorSpam/setup 40.42
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.69
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 4.74
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.4
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.57
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
75 TestFunctional/serial/CacheCmd/cache/add_local 2.01
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 38.43
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.42
87 TestFunctional/serial/InvalidService 4.08
89 TestFunctional/parallel/ConfigCmd 0.32
90 TestFunctional/parallel/DashboardCmd 22.89
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.54
93 TestFunctional/parallel/StatusCmd 0.75
97 TestFunctional/parallel/ServiceCmdConnect 7.57
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 47.57
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.42
103 TestFunctional/parallel/MySQL 25.28
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.48
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.33
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.2
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.83
122 TestFunctional/parallel/ImageCommands/ImageListShort 1.16
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.7
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
127 TestFunctional/parallel/ImageCommands/Setup 1.52
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.6
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
135 TestFunctional/parallel/ServiceCmd/List 0.25
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
139 TestFunctional/parallel/ServiceCmd/Format 0.39
140 TestFunctional/parallel/ProfileCmd/profile_list 0.51
141 TestFunctional/parallel/ServiceCmd/URL 0.35
142 TestFunctional/parallel/MountCmd/any-port 10.68
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
153 TestFunctional/parallel/MountCmd/specific-port 1.6
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
161 TestMultiControlPlane/serial/StartCluster 194.38
162 TestMultiControlPlane/serial/DeployApp 5.91
163 TestMultiControlPlane/serial/PingHostFromPods 1.12
164 TestMultiControlPlane/serial/AddWorkerNode 57.24
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
167 TestMultiControlPlane/serial/CopyFile 12.4
168 TestMultiControlPlane/serial/StopSecondaryNode 91.58
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.62
170 TestMultiControlPlane/serial/RestartSecondaryNode 47.85
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 434.18
173 TestMultiControlPlane/serial/DeleteSecondaryNode 17.85
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
175 TestMultiControlPlane/serial/StopCluster 272.89
176 TestMultiControlPlane/serial/RestartCluster 117.19
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
178 TestMultiControlPlane/serial/AddSecondaryNode 76.2
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
183 TestJSONOutput/start/Command 78.74
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.67
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.62
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.33
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.19
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 83.69
215 TestMountStart/serial/StartWithMountFirst 26.62
216 TestMountStart/serial/VerifyMountFirst 0.36
217 TestMountStart/serial/StartWithMountSecond 26.78
218 TestMountStart/serial/VerifyMountSecond 0.35
219 TestMountStart/serial/DeleteFirst 0.68
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.27
222 TestMountStart/serial/RestartStopped 22.17
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 143.45
227 TestMultiNode/serial/DeployApp2Nodes 5.09
228 TestMultiNode/serial/PingHostFrom2Pods 0.75
229 TestMultiNode/serial/AddNode 49.66
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.55
232 TestMultiNode/serial/CopyFile 6.88
233 TestMultiNode/serial/StopNode 2.18
234 TestMultiNode/serial/StartAfterStop 42.21
235 TestMultiNode/serial/RestartKeepsNodes 327.79
236 TestMultiNode/serial/DeleteNode 2.69
237 TestMultiNode/serial/StopMultiNode 182.03
238 TestMultiNode/serial/RestartMultiNode 95.2
239 TestMultiNode/serial/ValidateNameConflict 43.28
246 TestScheduledStopUnix 113.91
250 TestRunningBinaryUpgrade 186.15
255 TestPause/serial/Start 102.61
256 TestStoppedBinaryUpgrade/Setup 0.53
257 TestStoppedBinaryUpgrade/Upgrade 233.3
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
268 TestNoKubernetes/serial/StartWithK8s 70.85
276 TestNetworkPlugins/group/false 3.36
280 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
281 TestNoKubernetes/serial/StartWithStopK8s 42.3
282 TestNoKubernetes/serial/Start 27.37
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
284 TestNoKubernetes/serial/ProfileList 1.34
285 TestNoKubernetes/serial/Stop 1.29
286 TestNoKubernetes/serial/StartNoArgs 42.72
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
291 TestStartStop/group/no-preload/serial/FirstStart 140.99
293 TestStartStop/group/embed-certs/serial/FirstStart 54.93
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.73
296 TestStartStop/group/no-preload/serial/DeployApp 10.33
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
298 TestStartStop/group/no-preload/serial/Stop 91.02
299 TestStartStop/group/embed-certs/serial/DeployApp 9.28
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
301 TestStartStop/group/embed-certs/serial/Stop 91.16
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.15
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/old-k8s-version/serial/Stop 2.29
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/newest-cni/serial/FirstStart 48.53
320 TestNetworkPlugins/group/auto/Start 79.2
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
323 TestStartStop/group/newest-cni/serial/Stop 11.33
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/newest-cni/serial/SecondStart 34.66
326 TestNetworkPlugins/group/auto/KubeletFlags 0.23
327 TestNetworkPlugins/group/auto/NetCatPod 13.27
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
331 TestStartStop/group/newest-cni/serial/Pause 2.38
332 TestNetworkPlugins/group/auto/DNS 0.17
333 TestNetworkPlugins/group/auto/Localhost 0.13
334 TestNetworkPlugins/group/auto/HairPin 0.13
335 TestNetworkPlugins/group/kindnet/Start 59.51
336 TestNetworkPlugins/group/calico/Start 91.89
337 TestNetworkPlugins/group/custom-flannel/Start 100.93
338 TestNetworkPlugins/group/enable-default-cni/Start 98.15
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
342 TestNetworkPlugins/group/kindnet/DNS 0.25
343 TestNetworkPlugins/group/kindnet/Localhost 0.19
344 TestNetworkPlugins/group/kindnet/HairPin 0.16
345 TestNetworkPlugins/group/flannel/Start 76.48
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/calico/KubeletFlags 0.24
348 TestNetworkPlugins/group/calico/NetCatPod 11.25
349 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
350 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
351 TestNetworkPlugins/group/calico/DNS 0.16
352 TestNetworkPlugins/group/calico/Localhost 0.11
353 TestNetworkPlugins/group/calico/HairPin 0.11
354 TestNetworkPlugins/group/custom-flannel/DNS 0.17
355 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
356 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
359 TestNetworkPlugins/group/bridge/Start 80.35
360 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
365 TestNetworkPlugins/group/flannel/NetCatPod 10.25
366 TestNetworkPlugins/group/flannel/DNS 0.15
367 TestNetworkPlugins/group/flannel/Localhost 0.16
368 TestNetworkPlugins/group/flannel/HairPin 0.11
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
370 TestNetworkPlugins/group/bridge/NetCatPod 9.22
371 TestNetworkPlugins/group/bridge/DNS 0.14
372 TestNetworkPlugins/group/bridge/Localhost 0.1
373 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (9.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-223031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-223031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.948526789s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.95s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 10:32:10.919070   26072 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 10:32:10.919173   26072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
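The preload-exists check only has to confirm that the preloaded-images tarball is already in the local cache. A rough Go sketch of that lookup, assuming the cache layout shown in the log line above; the helper name is made up:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarball builds the cache path observed in the test log:
// $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-cri-o-overlay-amd64.tar.lz4
func preloadTarball(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// With MINIKUBE_HOME unset the path is relative; fine for a sketch.
	p := preloadTarball(os.Getenv("MINIKUBE_HOME"), "v1.20.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("no local preload:", err)
	}
}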

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-223031
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-223031: exit status 85 (57.974174ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-223031 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |          |
	|         | -p download-only-223031        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:01.012665   26084 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:32:01.012917   26084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:01.012927   26084 out.go:358] Setting ErrFile to fd 2...
	I0127 10:32:01.012934   26084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:01.013110   26084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	W0127 10:32:01.013267   26084 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20319-18835/.minikube/config/config.json: open /home/jenkins/minikube-integration/20319-18835/.minikube/config/config.json: no such file or directory
	I0127 10:32:01.013850   26084 out.go:352] Setting JSON to true
	I0127 10:32:01.014718   26084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4421,"bootTime":1737969500,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:32:01.014822   26084 start.go:139] virtualization: kvm guest
	I0127 10:32:01.017249   26084 out.go:97] [download-only-223031] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 10:32:01.017370   26084 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 10:32:01.017408   26084 notify.go:220] Checking for updates...
	I0127 10:32:01.018903   26084 out.go:169] MINIKUBE_LOCATION=20319
	I0127 10:32:01.020412   26084 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:32:01.021711   26084 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:32:01.022941   26084 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:32:01.024202   26084 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 10:32:01.026547   26084 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 10:32:01.026747   26084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:32:01.138288   26084 out.go:97] Using the kvm2 driver based on user configuration
	I0127 10:32:01.138329   26084 start.go:297] selected driver: kvm2
	I0127 10:32:01.138338   26084 start.go:901] validating driver "kvm2" against <nil>
	I0127 10:32:01.138702   26084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:01.138822   26084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-18835/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 10:32:01.153646   26084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 10:32:01.153692   26084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 10:32:01.154205   26084 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 10:32:01.154419   26084 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 10:32:01.154451   26084 cni.go:84] Creating CNI manager for ""
	I0127 10:32:01.154514   26084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 10:32:01.154529   26084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 10:32:01.154587   26084 start.go:340] cluster config:
	{Name:download-only-223031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-223031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:32:01.154775   26084 iso.go:125] acquiring lock: {Name:mk0f883495a3f513e89101f4329551c4a3feacbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:32:01.156723   26084 out.go:97] Downloading VM boot image ...
	I0127 10:32:01.156762   26084 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 10:32:04.409754   26084 out.go:97] Starting "download-only-223031" primary control-plane node in "download-only-223031" cluster
	I0127 10:32:04.409796   26084 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 10:32:04.433308   26084 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 10:32:04.433341   26084 cache.go:56] Caching tarball of preloaded images
	I0127 10:32:04.433501   26084 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 10:32:04.435408   26084 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 10:32:04.435433   26084 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 10:32:04.460585   26084 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-223031 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223031"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
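The cni.go lines in the Last Start log above record the decision rule: the kvm2 driver combined with the crio runtime leads minikube to recommend its bridge CNI. A toy Go restatement of that one rule, not the real cni.go logic:

package main

import "fmt"

// chooseCNI restates the decision logged above: with the kvm2 driver and
// the crio runtime, the bridge CNI is recommended. The fallback value is
// an assumption for illustration only.
func chooseCNI(driver, runtime string) string {
	if driver == "kvm2" && runtime == "crio" {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("kvm2", "crio")) // bridge
}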

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-223031
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (4.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-113956 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-113956 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.685042462s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.69s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 10:32:15.934552   26072 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 10:32:15.934617   26072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-18835/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-113956
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-113956: exit status 85 (59.526043ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-223031 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | -p download-only-223031        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| delete  | -p download-only-223031        | download-only-223031 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC | 27 Jan 25 10:32 UTC |
	| start   | -o=json --download-only        | download-only-113956 | jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | -p download-only-113956        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:11.289095   26310 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:32:11.289309   26310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:11.289318   26310 out.go:358] Setting ErrFile to fd 2...
	I0127 10:32:11.289322   26310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:11.289468   26310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 10:32:11.289976   26310 out.go:352] Setting JSON to true
	I0127 10:32:11.290794   26310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4431,"bootTime":1737969500,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:32:11.290886   26310 start.go:139] virtualization: kvm guest
	I0127 10:32:11.292921   26310 out.go:97] [download-only-113956] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 10:32:11.293069   26310 notify.go:220] Checking for updates...
	I0127 10:32:11.294603   26310 out.go:169] MINIKUBE_LOCATION=20319
	I0127 10:32:11.296019   26310 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:32:11.297469   26310 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:32:11.298817   26310 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:32:11.300214   26310 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-113956 host does not exist
	  To start a cluster, run: "minikube start -p download-only-113956"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-113956
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.6s)
=== RUN   TestBinaryMirror
I0127 10:32:16.491466   26072 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-963021 --alsologtostderr --binary-mirror http://127.0.0.1:35307 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-963021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-963021
--- PASS: TestBinaryMirror (0.60s)

TestOffline (81.53s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-880670 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-880670 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.502674254s)
helpers_test.go:175: Cleaning up "offline-crio-880670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-880670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-880670: (1.025356726s)
--- PASS: TestOffline (81.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-952541
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-952541: exit status 85 (50.768579ms)
-- stdout --
	* Profile "addons-952541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952541"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-952541
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-952541: exit status 85 (52.784893ms)
-- stdout --
	* Profile "addons-952541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952541"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (129.18s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-952541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-952541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.178706645s)
--- PASS: TestAddons/Setup (129.18s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-952541 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-952541 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (9.47s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-952541 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-952541 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b2cc622-bc1a-4351-a130-88e84e385bef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b2cc622-bc1a-4351-a130-88e84e385bef] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003994965s
addons_test.go:633: (dbg) Run:  kubectl --context addons-952541 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-952541 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-952541 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
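The two printenv checks above are the point of this test: the gcp-auth webhook mutates new pods so that GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT appear without any pod-spec changes. A minimal standalone sketch of the same check, assuming kubectl is on PATH and the addons-952541 context from this run still exists (this is not the harness's own helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask the busybox pod for the variable the gcp-auth webhook should inject.
	out, err := exec.Command("kubectl", "--context", "addons-952541",
		"exec", "busybox", "--",
		"printenv", "GOOGLE_APPLICATION_CREDENTIALS").CombinedOutput()
	if err != nil {
		fmt.Printf("printenv failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("injected credentials path: %s", out)
}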
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

TestAddons/parallel/Registry (16.88s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 38.326759ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-674ww" [927afa7c-f786-406c-96cb-762022cff929] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004718133s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qh979" [cb7be8e2-7920-425c-9305-451bcbf8f865] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018944634s
addons_test.go:331: (dbg) Run:  kubectl --context addons-952541 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-952541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-952541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.040018434s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 ip
2025/01/27 10:35:00 [DEBUG] GET http://192.168.39.92:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable registry --alsologtostderr -v=1
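The wget --spider probe above runs inside a throwaway busybox pod because registry.kube-system.svc.cluster.local only resolves on the cluster network. A standalone sketch of the same reachability check, assuming kubectl is on PATH and this run's context still exists:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Spider the registry service from inside the cluster; its DNS name
	// is not resolvable from the host, so a one-off pod does the probing.
	out, err := exec.Command("kubectl", "--context", "addons-952541",
		"run", "--rm", "registry-probe", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("probe failed:", err)
	}
}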
--- PASS: TestAddons/parallel/Registry (16.88s)

TestAddons/parallel/InspektorGadget (11.79s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rqw4q" [fbc892c6-dfa4-435f-90f1-92117fb21d9a] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004867489s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable inspektor-gadget --alsologtostderr -v=1: (5.78421095s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/MetricsServer (5.9s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.896292ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xb877" [8543ef6f-a533-4a1a-ba47-60aeb9645aab] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004309144s
addons_test.go:402: (dbg) Run:  kubectl --context addons-952541 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

TestAddons/parallel/CSI (46.75s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0127 10:35:02.781165   26072 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 10:35:02.787048   26072 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 10:35:02.787070   26072 kapi.go:107] duration metric: took 5.925166ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.934466ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-952541 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-952541 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0c2fb473-c0e4-4451-9f8a-9b59bb5bf05b] Pending
helpers_test.go:344: "task-pv-pod" [0c2fb473-c0e4-4451-9f8a-9b59bb5bf05b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0c2fb473-c0e4-4451-9f8a-9b59bb5bf05b] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.052256955s
addons_test.go:511: (dbg) Run:  kubectl --context addons-952541 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-952541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-952541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-952541 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-952541 delete pod task-pv-pod: (1.116171995s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-952541 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-952541 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-952541 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e92f3955-16cc-4f5f-b3e6-ebc3821b88b5] Pending
helpers_test.go:344: "task-pv-pod-restore" [e92f3955-16cc-4f5f-b3e6-ebc3821b88b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e92f3955-16cc-4f5f-b3e6-ebc3821b88b5] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003232154s
addons_test.go:553: (dbg) Run:  kubectl --context addons-952541 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-952541 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-952541 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.76167711s)
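The run of identical helpers_test.go:394 lines above is a poll loop: the harness keeps re-reading the PVC's .status.phase until it reports Bound. A rough standalone sketch of that pattern, assuming kubectl is on PATH; the interval and timeout are illustrative, not the harness's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPVCBound re-reads a PVC's phase the way the repeated
// helpers_test.go:394 lines above do, stopping once it is "Bound".
func waitForPVCBound(ctx, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
			"-n", "default").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // illustrative interval
	}
	return fmt.Errorf("pvc %s not Bound within %v", name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-952541", "hpvc-restore", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}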
--- PASS: TestAddons/parallel/CSI (46.75s)

TestAddons/parallel/Headlamp (23.01s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-952541 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-vg69f" [80c50e32-6ed2-46db-8f7a-fdf8b01bb5e9] Pending
helpers_test.go:344: "headlamp-69d78d796f-vg69f" [80c50e32-6ed2-46db-8f7a-fdf8b01bb5e9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-vg69f" [80c50e32-6ed2-46db-8f7a-fdf8b01bb5e9] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.013442483s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable headlamp --alsologtostderr -v=1: (6.147708417s)
--- PASS: TestAddons/parallel/Headlamp (23.01s)

TestAddons/parallel/CloudSpanner (5.79s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-ldvmk" [bb864b92-12d3-42b2-b368-d4c91fd15945] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013477411s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.79s)

TestAddons/parallel/LocalPath (57.06s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-952541 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-952541 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e2b590f8-79b9-41e9-8ada-331814a52b57] Pending
helpers_test.go:344: "test-local-path" [e2b590f8-79b9-41e9-8ada-331814a52b57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e2b590f8-79b9-41e9-8ada-331814a52b57] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e2b590f8-79b9-41e9-8ada-331814a52b57] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004750069s
addons_test.go:906: (dbg) Run:  kubectl --context addons-952541 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 ssh "cat /opt/local-path-provisioner/pvc-ecb4fb5f-9049-49d2-a5ca-1bdd762143ef_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-952541 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-952541 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.268056818s)
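The ssh cat above reads the pod's file straight off the node: the local-path provisioner lays each volume out under /opt/local-path-provisioner as <pv-name>_<namespace>_<pvc-name>, as the path in that command shows. A small sketch of assembling such a path from this run's values (the root directory is the provisioner's default here; configurations can differ):

package main

import (
	"fmt"
	"path/filepath"
)

// localPathDir reproduces the directory naming visible in the ssh step above:
// <root>/<pv-name>_<namespace>_<pvc-name>.
func localPathDir(root, pvName, namespace, pvcName string) string {
	return filepath.Join(root, fmt.Sprintf("%s_%s_%s", pvName, namespace, pvcName))
}

func main() {
	fmt.Println(localPathDir("/opt/local-path-provisioner",
		"pvc-ecb4fb5f-9049-49d2-a5ca-1bdd762143ef", "default", "test-pvc"))
}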
--- PASS: TestAddons/parallel/LocalPath (57.06s)

TestAddons/parallel/NvidiaDevicePlugin (6.79s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7gblr" [47f69620-0d36-4f43-8761-a6a2f69daf77] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004535652s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.79s)

TestAddons/parallel/Yakd (12.42s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-2tdl2" [30674291-245b-4881-a508-7605decfb4e1] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004760342s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-952541 addons disable yakd --alsologtostderr -v=1: (6.41277995s)
--- PASS: TestAddons/parallel/Yakd (12.42s)

TestAddons/StoppedEnableDisable (91.22s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-952541
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-952541: (1m30.951434093s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-952541
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-952541
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-952541
--- PASS: TestAddons/StoppedEnableDisable (91.22s)

TestCertOptions (61.85s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-901069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-901069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.64714704s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-901069 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-901069 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-901069 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-901069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-901069
--- PASS: TestCertOptions (61.85s)

TestCertExpiration (269.58s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-091274 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-091274 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (50.06792945s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-091274 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0127 11:39:26.924875   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-091274 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.508083671s)
helpers_test.go:175: Cleaning up "cert-expiration-091274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-091274
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-091274: (1.004983259s)
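The two starts above exercise certificate regeneration: the first issues certs that expire after 3 minutes, and the restart with --cert-expiration=8760h forces minikube to reissue them once they have lapsed. A quick way to see what was issued, sketched under the assumption that the profile still exists (it is deleted just below) and using the cert path seen in TestCertOptions:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the apiserver certificate's validity window from inside the VM.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-expiration-091274",
		"ssh", "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Printf("openssl check failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}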
--- PASS: TestCertExpiration (269.58s)

TestForceSystemdFlag (72s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-723290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-723290 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.794835334s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-723290 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-723290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-723290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-723290: (1.008770122s)
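The cat of /etc/crio/crio.conf.d/02-crio.conf above is how the flag is verified: with --force-systemd, CRI-O's drop-in config should select the systemd cgroup manager. A simplified stand-in for that assertion; the exact key checked here (cgroup_manager = "systemd") is an assumption about the drop-in's contents, and the profile from this run is deleted just above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump CRI-O's drop-in config from the VM and look for the systemd
	// cgroup manager setting; a simplification of the test's real check.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-723290",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}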
--- PASS: TestForceSystemdFlag (72.00s)

TestForceSystemdEnv (40.91s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-344999 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0127 11:34:26.924717   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-344999 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (39.260117726s)
helpers_test.go:175: Cleaning up "force-systemd-env-344999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-344999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-344999: (1.649965068s)
--- PASS: TestForceSystemdEnv (40.91s)

TestKVMDriverInstallOrUpdate (3.8s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0127 11:35:18.177539   26072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:35:18.177707   26072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 11:35:18.203818   26072 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 11:35:18.204170   26072 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 11:35:18.204232   26072 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3820480154/001/docker-machine-driver-kvm2
I0127 11:35:18.445161   26072 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3820480154/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0005c0520 gz:0xc0005c0528 tar:0xc0005c04d0 tar.bz2:0xc0005c04e0 tar.gz:0xc0005c04f0 tar.xz:0xc0005c0500 tar.zst:0xc0005c0510 tbz2:0xc0005c04e0 tgz:0xc0005c04f0 txz:0xc0005c0500 tzst:0xc0005c0510 xz:0xc0005c0530 zip:0xc0005c0540 zst:0xc0005c0538] Getters:map[file:0xc001cd9230 http:0xc000859130 https:0xc000859180] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 11:35:18.445210   26072 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3820480154/001/docker-machine-driver-kvm2
I0127 11:35:20.306590   26072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:35:20.306689   26072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 11:35:20.333020   26072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 11:35:20.333057   26072 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 11:35:20.333130   26072 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 11:35:20.333167   26072 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3820480154/002/docker-machine-driver-kvm2
I0127 11:35:20.391102   26072 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3820480154/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0005c0520 gz:0xc0005c0528 tar:0xc0005c04d0 tar.bz2:0xc0005c04e0 tar.gz:0xc0005c04f0 tar.xz:0xc0005c0500 tar.zst:0xc0005c0510 tbz2:0xc0005c04e0 tgz:0xc0005c04f0 txz:0xc0005c0500 tzst:0xc0005c0510 xz:0xc0005c0530 zip:0xc0005c0540 zst:0xc0005c0538] Getters:map[file:0xc001e9c760 http:0xc000bb0230 https:0xc000bb0280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 11:35:20.391161   26072 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3820480154/002/docker-machine-driver-kvm2
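The download lines above show the fallback order this test exercises: the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) is tried first with a go-getter checksum URL, and when its .sha256 file returns 404, the unsuffixed common name is fetched instead. A rough sketch of that ordering, probing with plain HTTP HEAD rather than minikube's real checksummed downloader:

package main

import (
	"fmt"
	"net/http"
)

// tryDriverURLs mirrors the fallback visible in the log above: the
// arch-specific asset first, then the common name.
func tryDriverURLs(version, arch string) (string, error) {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	urls := []string{
		base + "/docker-machine-driver-kvm2-" + arch, // arch-specific first
		base + "/docker-machine-driver-kvm2",         // common-version fallback
	}
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, nil
		}
	}
	return "", fmt.Errorf("no downloadable driver for %s/%s", version, arch)
}

func main() {
	fmt.Println(tryDriverURLs("v1.3.0", "amd64"))
}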
--- PASS: TestKVMDriverInstallOrUpdate (3.80s)

TestErrorSpam/setup (40.42s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-827198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-827198 --driver=kvm2  --container-runtime=crio
E0127 10:39:26.932855   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:26.939273   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:26.950660   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:26.972068   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:27.013442   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:27.094922   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:27.256420   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:27.578129   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:28.220183   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:29.501774   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:32.064641   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:37.186857   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:39:47.428460   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-827198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-827198 --driver=kvm2  --container-runtime=crio: (40.419996516s)
--- PASS: TestErrorSpam/setup (40.42s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.69s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.65s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (4.74s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop: (2.293470775s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop: (1.440402894s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-827198 --log_dir /tmp/nospam-827198 stop: (1.005469613s)
--- PASS: TestErrorSpam/stop (4.74s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20319-18835/.minikube/files/etc/test/nested/copy/26072/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.4s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0127 10:40:07.910507   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:40:48.873421   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-787474 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.403970977s)
--- PASS: TestFunctional/serial/StartWithProxy (52.40s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.57s)
=== RUN   TestFunctional/serial/SoftStart
I0127 10:40:59.888612   26072 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-787474 --alsologtostderr -v=8: (41.568334776s)
functional_test.go:663: soft start took 41.568998616s for "functional-787474" cluster.
I0127 10:41:41.457231   26072 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (41.57s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-787474 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:3.1: (1.103789095s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:3.3: (1.12673201s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 cache add registry.k8s.io/pause:latest: (1.110690197s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (2.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-787474 /tmp/TestFunctionalserialCacheCmdcacheadd_local1884782562/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache add minikube-local-cache-test:functional-787474
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 cache add minikube-local-cache-test:functional-787474: (1.677247216s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache delete minikube-local-cache-test:functional-787474
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-787474
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.759759ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 cache reload: (1.012402916s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
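The failed inspecti above is the point of the test: the image is deleted inside the node, confirmed missing, and then restored from the host-side cache. The equivalent manual sequence:

	out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
	out/minikube-linux-amd64 -p functional-787474 cache reload                                            # re-loads cached images into the node
	out/minikube-linux-amd64 -p functional-787474 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again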

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 kubectl -- --context functional-787474 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-787474 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 10:42:10.795256   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-787474 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.427098459s)
functional_test.go:761: restart took 38.427210866s for "functional-787474" cluster.
I0127 10:42:27.656532   26072 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (38.43s)
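--extra-config takes component.key=value pairs and is applied when the cluster restarts; the run above corresponds to:

	# restart the existing profile, injecting an extra admission plugin into the
	# apiserver, and wait for all components to come back up
	out/minikube-linux-amd64 start -p functional-787474 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all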

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-787474 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 logs: (1.335975224s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 logs --file /tmp/TestFunctionalserialLogsFileCmd3453688626/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 logs --file /tmp/TestFunctionalserialLogsFileCmd3453688626/001/logs.txt: (1.414063189s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-787474 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-787474
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-787474: exit status 115 (263.795351ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.59:32767 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-787474 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)
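testdata/invalidsvc.yaml itself is not reproduced in the log; a service that triggers the same SVC_UNREACHABLE exit (status 115) would look roughly like this sketch, with the selector deliberately matching no pod:

	kubectl --context functional-787474 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # matches nothing, so the service never has endpoints
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-amd64 service invalid-svc -p functional-787474   # exits 115: SVC_UNREACHABLE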

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 config get cpus: exit status 14 (61.711621ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 config get cpus: exit status 14 (45.501866ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
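The exit-code contract exercised here: config get on an unset key fails with status 14, while set/get/unset otherwise round-trip cleanly:

	out/minikube-linux-amd64 -p functional-787474 config get cpus   # exit 14: key not set
	out/minikube-linux-amd64 -p functional-787474 config set cpus 2
	out/minikube-linux-amd64 -p functional-787474 config get cpus   # prints 2
	out/minikube-linux-amd64 -p functional-787474 config unset cpus
	out/minikube-linux-amd64 -p functional-787474 config get cpus   # exit 14 again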

TestFunctional/parallel/DashboardCmd (22.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-787474 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-787474 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 34174: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.89s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-787474 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.379967ms)

-- stdout --
	* [functional-787474] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0127 10:42:46.833040   33685 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:42:46.833144   33685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:42:46.833152   33685 out.go:358] Setting ErrFile to fd 2...
	I0127 10:42:46.833157   33685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:42:46.833343   33685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 10:42:46.833833   33685 out.go:352] Setting JSON to false
	I0127 10:42:46.834718   33685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5067,"bootTime":1737969500,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:42:46.834807   33685 start.go:139] virtualization: kvm guest
	I0127 10:42:46.836831   33685 out.go:177] * [functional-787474] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 10:42:46.838010   33685 notify.go:220] Checking for updates...
	I0127 10:42:46.838015   33685 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 10:42:46.839398   33685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:42:46.840604   33685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:42:46.841846   33685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:42:46.843063   33685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 10:42:46.844300   33685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 10:42:46.845713   33685 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:42:46.846082   33685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:42:46.846133   33685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:42:46.861326   33685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0127 10:42:46.861811   33685 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:42:46.862354   33685 main.go:141] libmachine: Using API Version  1
	I0127 10:42:46.862388   33685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:42:46.862754   33685 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:42:46.862932   33685 main.go:141] libmachine: (functional-787474) Calling .DriverName
	I0127 10:42:46.863179   33685 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:42:46.863483   33685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:42:46.863526   33685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:42:46.878434   33685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0127 10:42:46.878871   33685 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:42:46.879497   33685 main.go:141] libmachine: Using API Version  1
	I0127 10:42:46.879539   33685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:42:46.879834   33685 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:42:46.880014   33685 main.go:141] libmachine: (functional-787474) Calling .DriverName
	I0127 10:42:46.912977   33685 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 10:42:46.914133   33685 start.go:297] selected driver: kvm2
	I0127 10:42:46.914150   33685 start.go:901] validating driver "kvm2" against &{Name:functional-787474 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-787474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:42:46.914268   33685 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 10:42:46.916884   33685 out.go:201] 
	W0127 10:42:46.918065   33685 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 10:42:46.919231   33685 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
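--dry-run validates the requested configuration against the existing profile without touching the VM; 250MB is rejected because it is below the 1800MB usable minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY):

	out/minikube-linux-amd64 start -p functional-787474 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio    # exit 23: requested memory too low
	out/minikube-linux-amd64 start -p functional-787474 --dry-run \
	  --driver=kvm2 --container-runtime=crio    # exit 0: profile config is valid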

TestFunctional/parallel/InternationalLanguage (0.54s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787474 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-787474 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (541.900235ms)

-- stdout --
	* [functional-787474] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0127 10:42:47.151371   33757 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:42:47.151504   33757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:42:47.151512   33757 out.go:358] Setting ErrFile to fd 2...
	I0127 10:42:47.151519   33757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:42:47.151908   33757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 10:42:47.152476   33757 out.go:352] Setting JSON to false
	I0127 10:42:47.153490   33757 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5067,"bootTime":1737969500,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 10:42:47.153564   33757 start.go:139] virtualization: kvm guest
	I0127 10:42:47.155723   33757 out.go:177] * [functional-787474] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 10:42:47.157300   33757 notify.go:220] Checking for updates...
	I0127 10:42:47.157320   33757 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 10:42:47.158584   33757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 10:42:47.159897   33757 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 10:42:47.161245   33757 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 10:42:47.162431   33757 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 10:42:47.163548   33757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 10:42:47.166450   33757 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:42:47.167067   33757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:42:47.167132   33757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:42:47.182666   33757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0127 10:42:47.183152   33757 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:42:47.183907   33757 main.go:141] libmachine: Using API Version  1
	I0127 10:42:47.183923   33757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:42:47.184300   33757 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:42:47.184503   33757 main.go:141] libmachine: (functional-787474) Calling .DriverName
	I0127 10:42:47.184803   33757 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:42:47.185207   33757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:42:47.185251   33757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:42:47.201261   33757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I0127 10:42:47.201768   33757 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:42:47.202314   33757 main.go:141] libmachine: Using API Version  1
	I0127 10:42:47.202339   33757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:42:47.202686   33757 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:42:47.202862   33757 main.go:141] libmachine: (functional-787474) Calling .DriverName
	I0127 10:42:47.324628   33757 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 10:42:47.441477   33757 start.go:297] selected driver: kvm2
	I0127 10:42:47.441508   33757 start.go:901] validating driver "kvm2" against &{Name:functional-787474 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-787474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:42:47.441623   33757 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 10:42:47.491499   33757 out.go:201] 
	W0127 10:42:47.521683   33757 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 10:42:47.616903   33757 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.54s)
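The French output is locale-driven: minikube selects translations from the standard locale environment variables, so the run above was presumably executed under an fr_FR locale, e.g. (assuming that locale is installed on the host):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-787474 --dry-run \
	  --memory 250MB --driver=kvm2 --container-runtime=crio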

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
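status supports plain, go-template, and JSON output, which is what the three invocations check; for example:

	out/minikube-linux-amd64 -p functional-787474 status
	out/minikube-linux-amd64 -p functional-787474 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-787474 status -o json

(The test's own format string labels the Kubelet field "kublet"; that is only a label typo in the template text, the field name is .Kubelet.)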

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-787474 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-787474 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-7s4z9" [3bae15b9-d268-43e5-a33c-2c92753d87c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-7s4z9" [3bae15b9-d268-43e5-a33c-2c92753d87c9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004678479s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.59:32204
functional_test.go:1675: http://192.168.50.59:32204: success! body:

Hostname: hello-node-connect-58f9cf68d8-7s4z9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.59:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.59:32204
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)
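The round trip performed above, by hand: create the deployment, expose it as a NodePort, ask minikube for the URL, and curl it (echoserver reflects the request back):

	kubectl --context functional-787474 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-787474 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-787474 service hello-node-connect --url)
	curl "$URL"   # e.g. http://192.168.50.59:32204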

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (47.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7f4b5ba7-04a6-4d7a-bbd9-4989302cec94] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0044821s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-787474 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-787474 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-787474 get pvc myclaim -o=json
I0127 10:42:40.818127   26072 retry.go:31] will retry after 2.112366111s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f9a1893c-1197-44b2-ad07-986f9d535dc2 ResourceVersion:691 Generation:0 CreationTimestamp:2025-01-27 10:42:40 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019c85c0 VolumeMode:0xc0019c85d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-787474 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787474 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b3fb7cef-797f-46d5-9d0f-acff8171265f] Pending
helpers_test.go:344: "sp-pod" [b3fb7cef-797f-46d5-9d0f-acff8171265f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b3fb7cef-797f-46d5-9d0f-acff8171265f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00431461s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-787474 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-787474 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787474 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b40897d7-3ac5-4919-a948-eeb13ba8f42e] Pending
helpers_test.go:344: "sp-pod" [b40897d7-3ac5-4919-a948-eeb13ba8f42e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b40897d7-3ac5-4919-a948-eeb13ba8f42e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003638171s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-787474 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.57s)
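The last-applied annotation in the retry message spells out the claim, so testdata/storage-provisioner/pvc.yaml is equivalent to applying:

	kubectl --context functional-787474 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF
	kubectl --context functional-787474 get pvc myclaim   # Pending until minikube-hostpath binds it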

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh -n functional-787474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cp functional-787474:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2710851991/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh -n functional-787474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh -n functional-787474 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)

TestFunctional/parallel/MySQL (25.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-787474 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-xgmfm" [85c8f23f-4381-4f37-8720-4e2c199a9302] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-xgmfm" [85c8f23f-4381-4f37-8720-4e2c199a9302] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003172687s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-787474 exec mysql-58ccfd96bb-xgmfm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-787474 exec mysql-58ccfd96bb-xgmfm -- mysql -ppassword -e "show databases;": exit status 1 (128.556195ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0127 10:43:21.887598   26072 retry.go:31] will retry after 835.126654ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-787474 exec mysql-58ccfd96bb-xgmfm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.28s)
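The ERROR 2002 above is expected: the pod reports Running before mysqld has finished creating its socket, so the client call is retried. A manual equivalent:

	# poll until mysqld inside the pod accepts connections
	until kubectl --context functional-787474 exec mysql-58ccfd96bb-xgmfm -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 1
	done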

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/26072/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /etc/test/nested/copy/26072/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/26072.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /etc/ssl/certs/26072.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/26072.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /usr/share/ca-certificates/26072.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/260722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /etc/ssl/certs/260722.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/260722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /usr/share/ca-certificates/260722.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)
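Each synced certificate is checked in three locations: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and what appears to be its OpenSSL subject-hash link (51391683.0, 3ec20f2e.0). A manual spot check over the same paths:

	for f in /etc/ssl/certs/26072.pem /usr/share/ca-certificates/26072.pem /etc/ssl/certs/51391683.0; do
	  out/minikube-linux-amd64 -p functional-787474 ssh "sudo cat $f" > /dev/null && echo "ok: $f"
	done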

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-787474 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active docker": exit status 1 (247.889741ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active containerd": exit status 1 (212.609569ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
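With crio selected, docker and containerd must both be inactive. systemctl is-active exits 3 for an inactive unit; the remote status shows up in stderr while the minikube command itself exits 1:

	out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active docker"       # prints inactive; remote exit 3
	out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active containerd"   # prints inactive; remote exit 3
	out/minikube-linux-amd64 -p functional-787474 ssh "sudo systemctl is-active crio"         # the active runtime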

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-787474 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-787474 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-lzlcw" [84da8b92-b1ef-4982-afe4-1b2dbd2091b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-lzlcw" [84da8b92-b1ef-4982-afe4-1b2dbd2091b6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004905207s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 32547: os: process already finished
helpers_test.go:502: unable to terminate pid 32559: os: process already finished
helpers_test.go:502: unable to terminate pid 32588: os: process already finished
helpers_test.go:508: unable to kill pid 32518: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-787474 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [26ace26d-577e-4c16-b369-c5de144b96dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [26ace26d-577e-4c16-b369-c5de144b96dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.004683491s
I0127 10:42:47.954117   26072 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.20s)
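tunnel runs as a long-lived daemon that routes service traffic to the host; testsvc.yaml is assumed here to define the nginx-svc service that the later WaitService steps consume:

	out/minikube-linux-amd64 -p functional-787474 tunnel &    # keep running; needs privileges to add routes
	kubectl --context functional-787474 apply -f testdata/testsvc.yaml
	kubectl --context functional-787474 get svc nginx-svc -w  # watch the service once the tunnel is up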

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 image ls --format short --alsologtostderr: (1.160884308s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787474 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-787474
localhost/kicbase/echo-server:functional-787474
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787474 image ls --format short --alsologtostderr:
I0127 10:43:01.068567   34811 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:01.068663   34811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:01.068668   34811 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:01.068673   34811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:01.068848   34811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
I0127 10:43:01.069381   34811 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:01.069473   34811 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:01.069810   34811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:01.069863   34811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:01.084646   34811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
I0127 10:43:01.085122   34811 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:01.085767   34811 main.go:141] libmachine: Using API Version  1
I0127 10:43:01.085794   34811 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:01.086131   34811 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:01.086317   34811 main.go:141] libmachine: (functional-787474) Calling .GetState
I0127 10:43:01.088113   34811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:01.088152   34811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:01.102333   34811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35559
I0127 10:43:01.102700   34811 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:01.103131   34811 main.go:141] libmachine: Using API Version  1
I0127 10:43:01.103150   34811 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:01.103510   34811 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:01.103730   34811 main.go:141] libmachine: (functional-787474) Calling .DriverName
I0127 10:43:01.103907   34811 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:01.103933   34811 main.go:141] libmachine: (functional-787474) Calling .GetSSHHostname
I0127 10:43:01.106682   34811 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:01.106995   34811 main.go:141] libmachine: (functional-787474) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:f3", ip: ""} in network mk-functional-787474: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:21 +0000 UTC Type:0 Mac:52:54:00:84:e8:f3 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:functional-787474 Clientid:01:52:54:00:84:e8:f3}
I0127 10:43:01.107033   34811 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined IP address 192.168.50.59 and MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:01.107190   34811 main.go:141] libmachine: (functional-787474) Calling .GetSSHPort
I0127 10:43:01.107378   34811 main.go:141] libmachine: (functional-787474) Calling .GetSSHKeyPath
I0127 10:43:01.107520   34811 main.go:141] libmachine: (functional-787474) Calling .GetSSHUsername
I0127 10:43:01.107682   34811 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/functional-787474/id_rsa Username:docker}
I0127 10:43:01.244620   34811 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:02.181077   34811 main.go:141] libmachine: Making call to close driver server
I0127 10:43:02.181106   34811 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:02.181387   34811 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:02.181403   34811 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
I0127 10:43:02.181408   34811 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:02.181444   34811 main.go:141] libmachine: Making call to close driver server
I0127 10:43:02.181453   34811 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:02.181678   34811 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:02.181712   34811 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:02.181692   34811 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.16s)
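
The table, json, and yaml variants of this listing are exercised by the subtests that follow; as a compact reference, a sketch that walks all four formats against the same profile (the loop itself is illustrative):

# Sketch: the four output formats covered by the ImageList* subtests.
for fmt in short table json yaml; do
  out/minikube-linux-amd64 -p functional-787474 image ls --format "$fmt" --alsologtostderr
done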
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787474 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-787474  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | 93f9c72967dbc | 48.5MB |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| localhost/minikube-local-cache-test     | functional-787474  | 02d713942b390 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-787474  | 5864ce524dc80 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787474 image ls --format table --alsologtostderr:
I0127 10:43:07.076780   34990 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:07.076891   34990 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:07.076902   34990 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:07.076909   34990 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:07.077188   34990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
I0127 10:43:07.078007   34990 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:07.078161   34990 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:07.078725   34990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:07.078807   34990 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:07.095388   34990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
I0127 10:43:07.095924   34990 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:07.096531   34990 main.go:141] libmachine: Using API Version  1
I0127 10:43:07.096557   34990 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:07.096911   34990 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:07.097129   34990 main.go:141] libmachine: (functional-787474) Calling .GetState
I0127 10:43:07.098907   34990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:07.098952   34990 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:07.115128   34990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
I0127 10:43:07.115559   34990 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:07.116171   34990 main.go:141] libmachine: Using API Version  1
I0127 10:43:07.116197   34990 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:07.116532   34990 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:07.116741   34990 main.go:141] libmachine: (functional-787474) Calling .DriverName
I0127 10:43:07.116912   34990 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:07.116935   34990 main.go:141] libmachine: (functional-787474) Calling .GetSSHHostname
I0127 10:43:07.119881   34990 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:07.120284   34990 main.go:141] libmachine: (functional-787474) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:f3", ip: ""} in network mk-functional-787474: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:21 +0000 UTC Type:0 Mac:52:54:00:84:e8:f3 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:functional-787474 Clientid:01:52:54:00:84:e8:f3}
I0127 10:43:07.120314   34990 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined IP address 192.168.50.59 and MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:07.120459   34990 main.go:141] libmachine: (functional-787474) Calling .GetSSHPort
I0127 10:43:07.120646   34990 main.go:141] libmachine: (functional-787474) Calling .GetSSHKeyPath
I0127 10:43:07.120779   34990 main.go:141] libmachine: (functional-787474) Calling .GetSSHUsername
I0127 10:43:07.120916   34990 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/functional-787474/id_rsa Username:docker}
I0127 10:43:07.220666   34990 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:07.266072   34990 main.go:141] libmachine: Making call to close driver server
I0127 10:43:07.266099   34990 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:07.266393   34990 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:07.266411   34990 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:07.266425   34990 main.go:141] libmachine: Making call to close driver server
I0127 10:43:07.266433   34990 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:07.266640   34990 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:07.266653   34990 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.70s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787474 image ls --format json --alsologtostderr:
[{"id":"5864ce524dc800253a0778b1da6668bd765e27c31e25e2a40e70b8a5529de95d","repoDigests":["localhost/my-image@sha256:82ab8ffc90aa6fc3385d5ec6c52e15294ebf0edbd51a3fa5aed3509d0ba7d58e"],"repoTags":["localhost/my-image:functional-787474"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906
d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","doc
ker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-787474"],"size":"4943877"},{"id":"02d713942b390306699dd4e67f70cabf6d45d78e819de27ef0edb94101e4c1ac","repoDigests":["localhost/minikube-local-cache-test@sha256:e4e788a6152f1ccda81d6223f85c644860e1fd337df0a024531930b7cbef1729"],"repoTags":["l
ocalhost/minikube-local-cache-test:functional-787474"],"size":"3330"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"9496
3761"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62
263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3
029"],"repoTags":[],"size":"249229937"},{"id":"15b4cdd2bdda39ab72bc3634c6c418400512a9981d49cefd6426999b56b64760","repoDigests":["docker.io/library/abb4ef3de6329bc165bcc2c22ae88092f2acf4805719bd4b704252216dac1e9e-tmp@sha256:ec17612294505c8858a9f3a39407750761aabf5866d8df98fd99238b698706ea"],"repoTags":[],"size":"1466018"},{"id":"93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3","repoDigests":["docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48461780"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{
"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787474 image ls --format json --alsologtostderr:
I0127 10:43:06.380586   34952 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:06.380904   34952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:06.380926   34952 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:06.380943   34952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:06.381261   34952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
I0127 10:43:06.382057   34952 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:06.382223   34952 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:06.382746   34952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:06.382831   34952 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:06.397693   34952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
I0127 10:43:06.398095   34952 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:06.398615   34952 main.go:141] libmachine: Using API Version  1
I0127 10:43:06.398632   34952 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:06.399001   34952 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:06.399207   34952 main.go:141] libmachine: (functional-787474) Calling .GetState
I0127 10:43:06.401155   34952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:06.401205   34952 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:06.415171   34952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
I0127 10:43:06.415564   34952 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:06.416134   34952 main.go:141] libmachine: Using API Version  1
I0127 10:43:06.416158   34952 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:06.416487   34952 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:06.416681   34952 main.go:141] libmachine: (functional-787474) Calling .DriverName
I0127 10:43:06.416839   34952 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:06.416863   34952 main.go:141] libmachine: (functional-787474) Calling .GetSSHHostname
I0127 10:43:06.419770   34952 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:06.420307   34952 main.go:141] libmachine: (functional-787474) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:f3", ip: ""} in network mk-functional-787474: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:21 +0000 UTC Type:0 Mac:52:54:00:84:e8:f3 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:functional-787474 Clientid:01:52:54:00:84:e8:f3}
I0127 10:43:06.420337   34952 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined IP address 192.168.50.59 and MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:06.420521   34952 main.go:141] libmachine: (functional-787474) Calling .GetSSHPort
I0127 10:43:06.420698   34952 main.go:141] libmachine: (functional-787474) Calling .GetSSHKeyPath
I0127 10:43:06.420803   34952 main.go:141] libmachine: (functional-787474) Calling .GetSSHUsername
I0127 10:43:06.420904   34952 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/functional-787474/id_rsa Username:docker}
I0127 10:43:06.546944   34952 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:06.899785   34952 main.go:141] libmachine: Making call to close driver server
I0127 10:43:06.899802   34952 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:06.900070   34952 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:06.900089   34952 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:06.900097   34952 main.go:141] libmachine: Making call to close driver server
I0127 10:43:06.900096   34952 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
I0127 10:43:06.900106   34952 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:06.900320   34952 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:06.900346   34952 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
I0127 10:43:06.900349   34952 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.70s)
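
The JSON listing above is an array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to post-process; a sketch, assuming jq is available on the host:

# Print "tag size" for every tagged image in the listing (untagged entries are skipped).
out/minikube-linux-amd64 -p functional-787474 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'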
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787474 image ls --format yaml --alsologtostderr:
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests:
- docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "48461780"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 02d713942b390306699dd4e67f70cabf6d45d78e819de27ef0edb94101e4c1ac
repoDigests:
- localhost/minikube-local-cache-test@sha256:e4e788a6152f1ccda81d6223f85c644860e1fd337df0a024531930b7cbef1729
repoTags:
- localhost/minikube-local-cache-test:functional-787474
size: "3330"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-787474
size: "4943877"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787474 image ls --format yaml --alsologtostderr:
I0127 10:43:02.232028   34851 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:02.232159   34851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:02.232169   34851 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:02.232173   34851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:02.232387   34851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
I0127 10:43:02.233044   34851 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:02.233173   34851 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:02.233565   34851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:02.233612   34851 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:02.248917   34851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
I0127 10:43:02.249435   34851 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:02.250050   34851 main.go:141] libmachine: Using API Version  1
I0127 10:43:02.250083   34851 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:02.250469   34851 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:02.250721   34851 main.go:141] libmachine: (functional-787474) Calling .GetState
I0127 10:43:02.252570   34851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:02.252626   34851 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:02.267282   34851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
I0127 10:43:02.267763   34851 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:02.268239   34851 main.go:141] libmachine: Using API Version  1
I0127 10:43:02.268259   34851 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:02.268641   34851 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:02.268835   34851 main.go:141] libmachine: (functional-787474) Calling .DriverName
I0127 10:43:02.269031   34851 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:02.269052   34851 main.go:141] libmachine: (functional-787474) Calling .GetSSHHostname
I0127 10:43:02.272279   34851 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:02.272784   34851 main.go:141] libmachine: (functional-787474) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:f3", ip: ""} in network mk-functional-787474: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:21 +0000 UTC Type:0 Mac:52:54:00:84:e8:f3 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:functional-787474 Clientid:01:52:54:00:84:e8:f3}
I0127 10:43:02.272811   34851 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined IP address 192.168.50.59 and MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:02.272997   34851 main.go:141] libmachine: (functional-787474) Calling .GetSSHPort
I0127 10:43:02.273172   34851 main.go:141] libmachine: (functional-787474) Calling .GetSSHKeyPath
I0127 10:43:02.273352   34851 main.go:141] libmachine: (functional-787474) Calling .GetSSHUsername
I0127 10:43:02.273497   34851 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/functional-787474/id_rsa Username:docker}
I0127 10:43:02.370715   34851 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 10:43:02.421259   34851 main.go:141] libmachine: Making call to close driver server
I0127 10:43:02.421274   34851 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:02.421534   34851 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:02.421564   34851 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:02.421577   34851 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
I0127 10:43:02.421587   34851 main.go:141] libmachine: Making call to close driver server
I0127 10:43:02.421597   34851 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:02.421832   34851 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:02.421847   34851 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh pgrep buildkitd: exit status 1 (191.539479ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image build -t localhost/my-image:functional-787474 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 image build -t localhost/my-image:functional-787474 testdata/build --alsologtostderr: (3.303473494s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787474 image build -t localhost/my-image:functional-787474 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 15b4cdd2bdd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-787474
--> 5864ce524dc
Successfully tagged localhost/my-image:functional-787474
5864ce524dc800253a0778b1da6668bd765e27c31e25e2a40e70b8a5529de95d
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787474 image build -t localhost/my-image:functional-787474 testdata/build --alsologtostderr:
I0127 10:43:02.660034   34904 out.go:345] Setting OutFile to fd 1 ...
I0127 10:43:02.660277   34904 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:02.660286   34904 out.go:358] Setting ErrFile to fd 2...
I0127 10:43:02.660290   34904 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 10:43:02.660452   34904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
I0127 10:43:02.661000   34904 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:02.661518   34904 config.go:182] Loaded profile config "functional-787474": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 10:43:02.661875   34904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:02.661910   34904 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:02.676500   34904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
I0127 10:43:02.676942   34904 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:02.677518   34904 main.go:141] libmachine: Using API Version  1
I0127 10:43:02.677544   34904 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:02.677836   34904 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:02.678004   34904 main.go:141] libmachine: (functional-787474) Calling .GetState
I0127 10:43:02.679750   34904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 10:43:02.679787   34904 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 10:43:02.694235   34904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39775
I0127 10:43:02.694671   34904 main.go:141] libmachine: () Calling .GetVersion
I0127 10:43:02.695082   34904 main.go:141] libmachine: Using API Version  1
I0127 10:43:02.695113   34904 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 10:43:02.695472   34904 main.go:141] libmachine: () Calling .GetMachineName
I0127 10:43:02.695662   34904 main.go:141] libmachine: (functional-787474) Calling .DriverName
I0127 10:43:02.695864   34904 ssh_runner.go:195] Run: systemctl --version
I0127 10:43:02.695887   34904 main.go:141] libmachine: (functional-787474) Calling .GetSSHHostname
I0127 10:43:02.698674   34904 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:02.699069   34904 main.go:141] libmachine: (functional-787474) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:e8:f3", ip: ""} in network mk-functional-787474: {Iface:virbr1 ExpiryTime:2025-01-27 11:40:21 +0000 UTC Type:0 Mac:52:54:00:84:e8:f3 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:functional-787474 Clientid:01:52:54:00:84:e8:f3}
I0127 10:43:02.699094   34904 main.go:141] libmachine: (functional-787474) DBG | domain functional-787474 has defined IP address 192.168.50.59 and MAC address 52:54:00:84:e8:f3 in network mk-functional-787474
I0127 10:43:02.699204   34904 main.go:141] libmachine: (functional-787474) Calling .GetSSHPort
I0127 10:43:02.699354   34904 main.go:141] libmachine: (functional-787474) Calling .GetSSHKeyPath
I0127 10:43:02.699467   34904 main.go:141] libmachine: (functional-787474) Calling .GetSSHUsername
I0127 10:43:02.699571   34904 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/functional-787474/id_rsa Username:docker}
I0127 10:43:02.781704   34904 build_images.go:161] Building image from path: /tmp/build.3465242262.tar
I0127 10:43:02.781759   34904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 10:43:02.791819   34904 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3465242262.tar
I0127 10:43:02.796109   34904 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3465242262.tar: stat -c "%s %y" /var/lib/minikube/build/build.3465242262.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3465242262.tar': No such file or directory
I0127 10:43:02.796133   34904 ssh_runner.go:362] scp /tmp/build.3465242262.tar --> /var/lib/minikube/build/build.3465242262.tar (3072 bytes)
I0127 10:43:02.824978   34904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3465242262
I0127 10:43:02.834144   34904 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3465242262 -xf /var/lib/minikube/build/build.3465242262.tar
I0127 10:43:02.842810   34904 crio.go:315] Building image: /var/lib/minikube/build/build.3465242262
I0127 10:43:02.842852   34904 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-787474 /var/lib/minikube/build/build.3465242262 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 10:43:05.861681   34904 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-787474 /var/lib/minikube/build/build.3465242262 --cgroup-manager=cgroupfs: (3.018796794s)
I0127 10:43:05.861764   34904 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3465242262
I0127 10:43:05.887495   34904 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3465242262.tar
I0127 10:43:05.914559   34904 build_images.go:217] Built localhost/my-image:functional-787474 from /tmp/build.3465242262.tar
I0127 10:43:05.914597   34904 build_images.go:133] succeeded building to: functional-787474
I0127 10:43:05.914603   34904 build_images.go:134] failed building to: 
I0127 10:43:05.914630   34904 main.go:141] libmachine: Making call to close driver server
I0127 10:43:05.914646   34904 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:05.914977   34904 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:05.914985   34904 main.go:141] libmachine: (functional-787474) DBG | Closing plugin on server side
I0127 10:43:05.914998   34904 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 10:43:05.915009   34904 main.go:141] libmachine: Making call to close driver server
I0127 10:43:05.915017   34904 main.go:141] libmachine: (functional-787474) Calling .Close
I0127 10:43:05.915246   34904 main.go:141] libmachine: Successfully made call to close driver server
I0127 10:43:05.915268   34904 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
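
The STEP 1/3 .. 3/3 lines above come from the Dockerfile in testdata/build (with the crio runtime, minikube drives the build through podman, as the stderr shows); a sketch of the equivalent standalone invocation, with the Dockerfile contents reconstructed from those STEP lines:

# testdata/build/Dockerfile, as implied by the build steps above:
#   FROM gcr.io/k8s-minikube/busybox
#   RUN true
#   ADD content.txt /
out/minikube-linux-amd64 -p functional-787474 image build \
  -t localhost/my-image:functional-787474 testdata/build --alsologtostderr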
TestFunctional/parallel/ImageCommands/Setup (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.500103625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-787474
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image load --daemon kicbase/echo-server:functional-787474 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-787474 image load --daemon kicbase/echo-server:functional-787474 --alsologtostderr: (1.081031594s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image load --daemon kicbase/echo-server:functional-787474 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-787474
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image load --daemon kicbase/echo-server:functional-787474 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)
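
Taken together, the three daemon-load subtests above amount to a pull/tag/load round trip from the host docker daemon into the cluster; a minimal sketch of that workflow using the same image and profile:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-787474
out/minikube-linux-amd64 -p functional-787474 image load --daemon kicbase/echo-server:functional-787474 --alsologtostderr
out/minikube-linux-amd64 -p functional-787474 image ls    # the loaded tag should now be listed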
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image save kicbase/echo-server:functional-787474 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image rm kicbase/echo-server:functional-787474 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-787474
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 image save --daemon kicbase/echo-server:functional-787474 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-787474
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
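
The last four image subtests form a save/remove/load round trip through a tarball and back into the host daemon; a sketch of the same cycle (tar path reused from this run):

TAR=/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
out/minikube-linux-amd64 -p functional-787474 image save kicbase/echo-server:functional-787474 "$TAR"
out/minikube-linux-amd64 -p functional-787474 image rm kicbase/echo-server:functional-787474
out/minikube-linux-amd64 -p functional-787474 image load "$TAR"
out/minikube-linux-amd64 -p functional-787474 image save --daemon kicbase/echo-server:functional-787474
docker image inspect localhost/kicbase/echo-server:functional-787474   # note the localhost/ prefix after save --daemon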
TestFunctional/parallel/ServiceCmd/List (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service list -o json
functional_test.go:1494: Took "252.60417ms" to run "out/minikube-linux-amd64 -p functional-787474 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.59:31607
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "434.346878ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "72.728502ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
TestFunctional/parallel/ServiceCmd/URL (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.59:31607
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
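
The ServiceCmd subtests above resolve the same hello-node endpoint several different ways; a sketch of the variants (service name and namespace taken from this run):

out/minikube-linux-amd64 -p functional-787474 service list -o json                                   # all services, machine-readable
out/minikube-linux-amd64 -p functional-787474 service --namespace=default --https --url hello-node   # https endpoint
out/minikube-linux-amd64 -p functional-787474 service hello-node --url                               # http endpoint
out/minikube-linux-amd64 -p functional-787474 service hello-node --url --format='{{.IP}}'            # node IP only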
TestFunctional/parallel/MountCmd/any-port (10.68s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdany-port2543191172/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737974566287059792" to /tmp/TestFunctionalparallelMountCmdany-port2543191172/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737974566287059792" to /tmp/TestFunctionalparallelMountCmdany-port2543191172/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737974566287059792" to /tmp/TestFunctionalparallelMountCmdany-port2543191172/001/test-1737974566287059792
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.541084ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 10:42:46.557916   26072 retry.go:31] will retry after 542.706407ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 10:42 test-1737974566287059792
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh cat /mount-9p/test-1737974566287059792
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-787474 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e1cd4324-6d83-4af5-8701-eb7c6352152a] Pending
helpers_test.go:344: "busybox-mount" [e1cd4324-6d83-4af5-8701-eb7c6352152a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e1cd4324-6d83-4af5-8701-eb7c6352152a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e1cd4324-6d83-4af5-8701-eb7c6352152a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.002956701s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-787474 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdany-port2543191172/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.68s)
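
The first findmnt probe above fails because the 9p mount is still coming up, and the harness retries after a backoff (the retry.go line). A minimal sketch of that poll-with-backoff pattern, assuming the same probe command; this is an illustration, not minikube's actual retry implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount re-runs the findmnt probe the test uses until the 9p
	// mount appears, doubling the wait between attempts.
	func waitForMount(profile, target string, attempts int) error {
		backoff := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			probe := fmt.Sprintf("findmnt -T %s | grep 9p", target)
			if exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", probe).Run() == nil {
				return nil
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return fmt.Errorf("%s never became a 9p mount", target)
	}

	func main() {
		if err := waitForMount("functional-787474", "/mount-9p", 5); err != nil {
			fmt.Println(err)
		}
	}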

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "399.936864ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.542891ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
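
The --light run above is roughly 8x faster because it skips the per-profile status checks. For anyone consuming the JSON output programmatically, a hedged sketch of decoding it; the valid/invalid top-level keys are an assumption about the current schema and may change between releases:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"profile", "list", "-o", "json", "--light").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		// Assumed shape: {"valid": [...], "invalid": [...]}.
		var profiles struct {
			Valid   []struct{ Name string } `json:"valid"`
			Invalid []struct{ Name string } `json:"invalid"`
		}
		if err := json.Unmarshal(out, &profiles); err != nil {
			fmt.Println("unexpected schema:", err)
			return
		}
		for _, p := range profiles.Valid {
			fmt.Println("valid profile:", p.Name)
		}
	}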

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-787474 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
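
The jsonpath query above is how the test learns which LoadBalancer IP `minikube tunnel` assigned to nginx-svc. The same query works standalone; a small sketch wrapping it (context name and service copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-787474",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println("tunnel-assigned ingress IP:", string(out))
	}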

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.167.40 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-787474 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 update-context --alsologtostderr -v=2
2025/01/27 10:43:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/MountCmd/specific-port (1.6s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdspecific-port2350474854/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.539972ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 10:42:57.171837   26072 retry.go:31] will retry after 289.554247ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdspecific-port2350474854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "sudo umount -f /mount-9p": exit status 1 (243.787416ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-787474 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdspecific-port2350474854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T" /mount1: exit status 1 (278.043646ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 10:42:58.848808   26072 retry.go:31] will retry after 627.510781ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787474 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-787474 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup584077341/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-787474
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-787474
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-787474
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (194.38s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-669997 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 10:44:26.924881   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:44:54.637053   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-669997 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.758583585s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.38s)

TestMultiControlPlane/serial/DeployApp (5.91s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-669997 -- rollout status deployment/busybox: (3.919424493s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-76g4c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-b2vwd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-zrqhb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-76g4c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-b2vwd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-zrqhb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-76g4c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-b2vwd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-zrqhb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.91s)

TestMultiControlPlane/serial/PingHostFromPods (1.12s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-76g4c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-76g4c -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-b2vwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-b2vwd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-zrqhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-669997 -- exec busybox-58667487b6-zrqhb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
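
The pipeline in the exec commands above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, extracts the host IP from busybox's nslookup output: line 5 carries the resolved address and its third space-separated field is the IP, which is then pinged. The same extraction in Go, with an illustrative sample assuming busybox-style formatting:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5
	// and return its third space-separated field.
	func hostIPFromNslookup(output string) (string, error) {
		lines := strings.Split(output, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
		}
		fields := strings.Split(lines[4], " ") // NR==5 -> index 4
		if len(fields) < 3 {
			return "", fmt.Errorf("line 5 has fewer than 3 fields: %q", lines[4])
		}
		return fields[2], nil // cut -d' ' -f3
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1\n"
		fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1 <nil>
	}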

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.24s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-669997 -v=7 --alsologtostderr
E0127 10:47:34.555550   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.561920   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.573269   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.594698   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.636057   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.717486   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:34.879038   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:35.200792   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:35.842663   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:37.124399   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:47:39.685796   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-669997 -v=7 --alsologtostderr: (56.427223392s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.24s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-669997 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (12.4s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp testdata/cp-test.txt ha-669997:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4105671585/001/cp-test_ha-669997.txt
E0127 10:47:44.807623   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997:/home/docker/cp-test.txt ha-669997-m02:/home/docker/cp-test_ha-669997_ha-669997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test_ha-669997_ha-669997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997:/home/docker/cp-test.txt ha-669997-m03:/home/docker/cp-test_ha-669997_ha-669997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test_ha-669997_ha-669997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997:/home/docker/cp-test.txt ha-669997-m04:/home/docker/cp-test_ha-669997_ha-669997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test_ha-669997_ha-669997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp testdata/cp-test.txt ha-669997-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4105671585/001/cp-test_ha-669997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m02:/home/docker/cp-test.txt ha-669997:/home/docker/cp-test_ha-669997-m02_ha-669997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test_ha-669997-m02_ha-669997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m02:/home/docker/cp-test.txt ha-669997-m03:/home/docker/cp-test_ha-669997-m02_ha-669997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test_ha-669997-m02_ha-669997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m02:/home/docker/cp-test.txt ha-669997-m04:/home/docker/cp-test_ha-669997-m02_ha-669997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test_ha-669997-m02_ha-669997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp testdata/cp-test.txt ha-669997-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4105671585/001/cp-test_ha-669997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m03:/home/docker/cp-test.txt ha-669997:/home/docker/cp-test_ha-669997-m03_ha-669997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test_ha-669997-m03_ha-669997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m03:/home/docker/cp-test.txt ha-669997-m02:/home/docker/cp-test_ha-669997-m03_ha-669997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test_ha-669997-m03_ha-669997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m03:/home/docker/cp-test.txt ha-669997-m04:/home/docker/cp-test_ha-669997-m03_ha-669997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test_ha-669997-m03_ha-669997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp testdata/cp-test.txt ha-669997-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4105671585/001/cp-test_ha-669997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m04:/home/docker/cp-test.txt ha-669997:/home/docker/cp-test_ha-669997-m04_ha-669997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997 "sudo cat /home/docker/cp-test_ha-669997-m04_ha-669997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m04:/home/docker/cp-test.txt ha-669997-m02:/home/docker/cp-test_ha-669997-m04_ha-669997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test.txt"
E0127 10:47:55.048957   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m02 "sudo cat /home/docker/cp-test_ha-669997-m04_ha-669997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 cp ha-669997-m04:/home/docker/cp-test.txt ha-669997-m03:/home/docker/cp-test_ha-669997-m04_ha-669997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 ssh -n ha-669997-m03 "sudo cat /home/docker/cp-test_ha-669997-m04_ha-669997-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.40s)
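
The long Run list above is an all-pairs matrix: cp-test.txt is pushed to every node, pulled back to the host, and copied from each node to every other node, with a `sudo cat` verifying each hop. A sketch that regenerates the node-to-node part of that command matrix (node names taken from the log):

	package main

	import "fmt"

	func main() {
		nodes := []string{"ha-669997", "ha-669997-m02", "ha-669997-m03", "ha-669997-m04"}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				fmt.Printf("out/minikube-linux-amd64 -p ha-669997 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					src, dst, src, dst)
			}
		}
	}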

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.58s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 node stop m02 -v=7 --alsologtostderr
E0127 10:48:15.531156   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:48:56.492682   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:49:26.924591   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-669997 node stop m02 -v=7 --alsologtostderr: (1m30.982698893s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr: exit status 7 (597.676258ms)

-- stdout --
	ha-669997
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-669997-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-669997-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-669997-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0127 10:49:27.024742   39733 out.go:345] Setting OutFile to fd 1 ...
	I0127 10:49:27.024848   39733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:49:27.024856   39733 out.go:358] Setting ErrFile to fd 2...
	I0127 10:49:27.024861   39733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:49:27.025057   39733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 10:49:27.025228   39733 out.go:352] Setting JSON to false
	I0127 10:49:27.025257   39733 mustload.go:65] Loading cluster: ha-669997
	I0127 10:49:27.025366   39733 notify.go:220] Checking for updates...
	I0127 10:49:27.025697   39733 config.go:182] Loaded profile config "ha-669997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 10:49:27.025722   39733 status.go:174] checking status of ha-669997 ...
	I0127 10:49:27.026199   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.026248   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.040845   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0127 10:49:27.041363   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.042008   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.042040   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.042497   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.042681   39733 main.go:141] libmachine: (ha-669997) Calling .GetState
	I0127 10:49:27.044290   39733 status.go:371] ha-669997 host status = "Running" (err=<nil>)
	I0127 10:49:27.044307   39733 host.go:66] Checking if "ha-669997" exists ...
	I0127 10:49:27.044582   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.044621   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.059814   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0127 10:49:27.060334   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.060875   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.060897   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.061268   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.061459   39733 main.go:141] libmachine: (ha-669997) Calling .GetIP
	I0127 10:49:27.065083   39733 main.go:141] libmachine: (ha-669997) DBG | domain ha-669997 has defined MAC address 52:54:00:18:99:a5 in network mk-ha-669997
	I0127 10:49:27.065644   39733 main.go:141] libmachine: (ha-669997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:99:a5", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:43:38 +0000 UTC Type:0 Mac:52:54:00:18:99:a5 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-669997 Clientid:01:52:54:00:18:99:a5}
	I0127 10:49:27.065673   39733 main.go:141] libmachine: (ha-669997) DBG | domain ha-669997 has defined IP address 192.168.39.40 and MAC address 52:54:00:18:99:a5 in network mk-ha-669997
	I0127 10:49:27.065828   39733 host.go:66] Checking if "ha-669997" exists ...
	I0127 10:49:27.066261   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.066313   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.081916   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I0127 10:49:27.082446   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.082960   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.082982   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.083338   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.083538   39733 main.go:141] libmachine: (ha-669997) Calling .DriverName
	I0127 10:49:27.083749   39733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:49:27.083776   39733 main.go:141] libmachine: (ha-669997) Calling .GetSSHHostname
	I0127 10:49:27.086825   39733 main.go:141] libmachine: (ha-669997) DBG | domain ha-669997 has defined MAC address 52:54:00:18:99:a5 in network mk-ha-669997
	I0127 10:49:27.087254   39733 main.go:141] libmachine: (ha-669997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:99:a5", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:43:38 +0000 UTC Type:0 Mac:52:54:00:18:99:a5 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-669997 Clientid:01:52:54:00:18:99:a5}
	I0127 10:49:27.087279   39733 main.go:141] libmachine: (ha-669997) DBG | domain ha-669997 has defined IP address 192.168.39.40 and MAC address 52:54:00:18:99:a5 in network mk-ha-669997
	I0127 10:49:27.087450   39733 main.go:141] libmachine: (ha-669997) Calling .GetSSHPort
	I0127 10:49:27.087597   39733 main.go:141] libmachine: (ha-669997) Calling .GetSSHKeyPath
	I0127 10:49:27.087752   39733 main.go:141] libmachine: (ha-669997) Calling .GetSSHUsername
	I0127 10:49:27.087878   39733 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/ha-669997/id_rsa Username:docker}
	I0127 10:49:27.167726   39733 ssh_runner.go:195] Run: systemctl --version
	I0127 10:49:27.173414   39733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:49:27.188667   39733 kubeconfig.go:125] found "ha-669997" server: "https://192.168.39.254:8443"
	I0127 10:49:27.188710   39733 api_server.go:166] Checking apiserver status ...
	I0127 10:49:27.188749   39733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 10:49:27.202337   39733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1088/cgroup
	W0127 10:49:27.212797   39733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1088/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 10:49:27.212845   39733 ssh_runner.go:195] Run: ls
	I0127 10:49:27.217196   39733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 10:49:27.221770   39733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 10:49:27.221789   39733 status.go:463] ha-669997 apiserver status = Running (err=<nil>)
	I0127 10:49:27.221797   39733 status.go:176] ha-669997 status: &{Name:ha-669997 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:49:27.221814   39733 status.go:174] checking status of ha-669997-m02 ...
	I0127 10:49:27.222122   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.222161   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.237345   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0127 10:49:27.237724   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.238163   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.238184   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.238497   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.238739   39733 main.go:141] libmachine: (ha-669997-m02) Calling .GetState
	I0127 10:49:27.240228   39733 status.go:371] ha-669997-m02 host status = "Stopped" (err=<nil>)
	I0127 10:49:27.240243   39733 status.go:384] host is not running, skipping remaining checks
	I0127 10:49:27.240250   39733 status.go:176] ha-669997-m02 status: &{Name:ha-669997-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:49:27.240273   39733 status.go:174] checking status of ha-669997-m03 ...
	I0127 10:49:27.240580   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.240623   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.254627   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0127 10:49:27.255093   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.255725   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.255750   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.256045   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.256207   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetState
	I0127 10:49:27.257687   39733 status.go:371] ha-669997-m03 host status = "Running" (err=<nil>)
	I0127 10:49:27.257712   39733 host.go:66] Checking if "ha-669997-m03" exists ...
	I0127 10:49:27.258012   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.258064   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.272582   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0127 10:49:27.273019   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.273528   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.273551   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.273876   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.274086   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetIP
	I0127 10:49:27.277441   39733 main.go:141] libmachine: (ha-669997-m03) DBG | domain ha-669997-m03 has defined MAC address 52:54:00:f3:1f:56 in network mk-ha-669997
	I0127 10:49:27.277885   39733 main.go:141] libmachine: (ha-669997-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:56", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:45:39 +0000 UTC Type:0 Mac:52:54:00:f3:1f:56 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-669997-m03 Clientid:01:52:54:00:f3:1f:56}
	I0127 10:49:27.277909   39733 main.go:141] libmachine: (ha-669997-m03) DBG | domain ha-669997-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:f3:1f:56 in network mk-ha-669997
	I0127 10:49:27.278116   39733 host.go:66] Checking if "ha-669997-m03" exists ...
	I0127 10:49:27.278408   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.278445   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.293186   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0127 10:49:27.293608   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.294051   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.294079   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.294363   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.294548   39733 main.go:141] libmachine: (ha-669997-m03) Calling .DriverName
	I0127 10:49:27.294709   39733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:49:27.294726   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetSSHHostname
	I0127 10:49:27.297248   39733 main.go:141] libmachine: (ha-669997-m03) DBG | domain ha-669997-m03 has defined MAC address 52:54:00:f3:1f:56 in network mk-ha-669997
	I0127 10:49:27.297685   39733 main.go:141] libmachine: (ha-669997-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:56", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:45:39 +0000 UTC Type:0 Mac:52:54:00:f3:1f:56 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-669997-m03 Clientid:01:52:54:00:f3:1f:56}
	I0127 10:49:27.297714   39733 main.go:141] libmachine: (ha-669997-m03) DBG | domain ha-669997-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:f3:1f:56 in network mk-ha-669997
	I0127 10:49:27.297870   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetSSHPort
	I0127 10:49:27.298029   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetSSHKeyPath
	I0127 10:49:27.298180   39733 main.go:141] libmachine: (ha-669997-m03) Calling .GetSSHUsername
	I0127 10:49:27.298301   39733 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/ha-669997-m03/id_rsa Username:docker}
	I0127 10:49:27.378307   39733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:49:27.391942   39733 kubeconfig.go:125] found "ha-669997" server: "https://192.168.39.254:8443"
	I0127 10:49:27.391973   39733 api_server.go:166] Checking apiserver status ...
	I0127 10:49:27.392006   39733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 10:49:27.404908   39733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	W0127 10:49:27.413510   39733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 10:49:27.413570   39733 ssh_runner.go:195] Run: ls
	I0127 10:49:27.417668   39733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 10:49:27.422101   39733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 10:49:27.422120   39733 status.go:463] ha-669997-m03 apiserver status = Running (err=<nil>)
	I0127 10:49:27.422127   39733 status.go:176] ha-669997-m03 status: &{Name:ha-669997-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 10:49:27.422140   39733 status.go:174] checking status of ha-669997-m04 ...
	I0127 10:49:27.422461   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.422500   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.440015   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0127 10:49:27.440394   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.440865   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.440887   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.441192   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.441399   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetState
	I0127 10:49:27.442965   39733 status.go:371] ha-669997-m04 host status = "Running" (err=<nil>)
	I0127 10:49:27.442979   39733 host.go:66] Checking if "ha-669997-m04" exists ...
	I0127 10:49:27.443366   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.443407   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.457766   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0127 10:49:27.458244   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.458786   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.458805   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.459102   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.459254   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetIP
	I0127 10:49:27.462227   39733 main.go:141] libmachine: (ha-669997-m04) DBG | domain ha-669997-m04 has defined MAC address 52:54:00:79:85:94 in network mk-ha-669997
	I0127 10:49:27.462643   39733 main.go:141] libmachine: (ha-669997-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:85:94", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:47:00 +0000 UTC Type:0 Mac:52:54:00:79:85:94 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-669997-m04 Clientid:01:52:54:00:79:85:94}
	I0127 10:49:27.462670   39733 main.go:141] libmachine: (ha-669997-m04) DBG | domain ha-669997-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:85:94 in network mk-ha-669997
	I0127 10:49:27.462806   39733 host.go:66] Checking if "ha-669997-m04" exists ...
	I0127 10:49:27.463120   39733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 10:49:27.463160   39733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 10:49:27.478579   39733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0127 10:49:27.479001   39733 main.go:141] libmachine: () Calling .GetVersion
	I0127 10:49:27.479437   39733 main.go:141] libmachine: Using API Version  1
	I0127 10:49:27.479460   39733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 10:49:27.479817   39733 main.go:141] libmachine: () Calling .GetMachineName
	I0127 10:49:27.479981   39733 main.go:141] libmachine: (ha-669997-m04) Calling .DriverName
	I0127 10:49:27.480171   39733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 10:49:27.480191   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetSSHHostname
	I0127 10:49:27.483168   39733 main.go:141] libmachine: (ha-669997-m04) DBG | domain ha-669997-m04 has defined MAC address 52:54:00:79:85:94 in network mk-ha-669997
	I0127 10:49:27.483709   39733 main.go:141] libmachine: (ha-669997-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:85:94", ip: ""} in network mk-ha-669997: {Iface:virbr1 ExpiryTime:2025-01-27 11:47:00 +0000 UTC Type:0 Mac:52:54:00:79:85:94 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-669997-m04 Clientid:01:52:54:00:79:85:94}
	I0127 10:49:27.483728   39733 main.go:141] libmachine: (ha-669997-m04) DBG | domain ha-669997-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:85:94 in network mk-ha-669997
	I0127 10:49:27.483903   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetSSHPort
	I0127 10:49:27.484051   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetSSHKeyPath
	I0127 10:49:27.484177   39733 main.go:141] libmachine: (ha-669997-m04) Calling .GetSSHUsername
	I0127 10:49:27.484360   39733 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/ha-669997-m04/id_rsa Username:docker}
	I0127 10:49:27.562124   39733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 10:49:27.575815   39733 status.go:176] ha-669997-m04 status: &{Name:ha-669997-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.58s)
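
Note the status invocation above exits 7 rather than 0: with m02 stopped, `minikube status` signals the degraded state through its exit code while still printing a full per-node report on stdout. Callers therefore have to read the output even on a non-zero exit; a minimal sketch of that handling:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-669997", "status")
		out, err := cmd.Output() // stdout is captured even when status exits non-zero
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("degraded (exit %d); report:\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("all nodes healthy:\n%s", out)
	}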

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.85s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-669997 node start m02 -v=7 --alsologtostderr: (46.957111139s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (434.18s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-669997 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-669997 -v=7 --alsologtostderr
E0127 10:50:18.414926   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:52:34.555949   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:53:02.256879   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 10:54:26.925100   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-669997 -v=7 --alsologtostderr: (4m33.930307126s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-669997 --wait=true -v=7 --alsologtostderr
E0127 10:55:50.001124   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-669997 --wait=true -v=7 --alsologtostderr: (2m40.152208102s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-669997
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (434.18s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 node delete m03 -v=7 --alsologtostderr
E0127 10:57:34.555784   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-669997 node delete m03 -v=7 --alsologtostderr: (17.148608399s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.85s)
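
The go-template passed to kubectl above is ordinary Go text/template syntax, so the Ready-condition walk can be exercised locally. A sketch, using a hypothetical two-node document shaped like kubectl get nodes -o json:

	package main

	import (
		"os"
		"text/template"
	)

	// The same template string the test passes to kubectl; kubectl
	// evaluates it with Go's template engine.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Hypothetical node list; only the fields the template touches.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			},
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
	}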

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

TestMultiControlPlane/serial/StopCluster (272.89s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 stop -v=7 --alsologtostderr
E0127 10:59:26.924989   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-669997 stop -v=7 --alsologtostderr: (4m32.779130601s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr: exit status 7 (112.542843ms)

-- stdout --
	ha-669997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-669997-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-669997-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:02:22.360580   44042 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:02:22.360835   44042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:02:22.360845   44042 out.go:358] Setting ErrFile to fd 2...
	I0127 11:02:22.360849   44042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:02:22.361035   44042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:02:22.361257   44042 out.go:352] Setting JSON to false
	I0127 11:02:22.361288   44042 mustload.go:65] Loading cluster: ha-669997
	I0127 11:02:22.361413   44042 notify.go:220] Checking for updates...
	I0127 11:02:22.361801   44042 config.go:182] Loaded profile config "ha-669997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:02:22.361826   44042 status.go:174] checking status of ha-669997 ...
	I0127 11:02:22.362356   44042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:02:22.362390   44042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:22.388115   44042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0127 11:02:22.388576   44042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:22.389143   44042 main.go:141] libmachine: Using API Version  1
	I0127 11:02:22.389170   44042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:22.389607   44042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:22.389869   44042 main.go:141] libmachine: (ha-669997) Calling .GetState
	I0127 11:02:22.391661   44042 status.go:371] ha-669997 host status = "Stopped" (err=<nil>)
	I0127 11:02:22.391677   44042 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:22.391683   44042 status.go:176] ha-669997 status: &{Name:ha-669997 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:02:22.391714   44042 status.go:174] checking status of ha-669997-m02 ...
	I0127 11:02:22.392013   44042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:02:22.392070   44042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:22.406164   44042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0127 11:02:22.406531   44042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:22.407006   44042 main.go:141] libmachine: Using API Version  1
	I0127 11:02:22.407031   44042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:22.407391   44042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:22.407573   44042 main.go:141] libmachine: (ha-669997-m02) Calling .GetState
	I0127 11:02:22.409104   44042 status.go:371] ha-669997-m02 host status = "Stopped" (err=<nil>)
	I0127 11:02:22.409116   44042 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:22.409121   44042 status.go:176] ha-669997-m02 status: &{Name:ha-669997-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:02:22.409134   44042 status.go:174] checking status of ha-669997-m04 ...
	I0127 11:02:22.409415   44042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:02:22.409447   44042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:02:22.424300   44042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0127 11:02:22.424631   44042 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:02:22.425091   44042 main.go:141] libmachine: Using API Version  1
	I0127 11:02:22.425114   44042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:02:22.425466   44042 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:02:22.425704   44042 main.go:141] libmachine: (ha-669997-m04) Calling .GetState
	I0127 11:02:22.427435   44042 status.go:371] ha-669997-m04 host status = "Stopped" (err=<nil>)
	I0127 11:02:22.427447   44042 status.go:384] host is not running, skipping remaining checks
	I0127 11:02:22.427452   44042 status.go:176] ha-669997-m04 status: &{Name:ha-669997-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.89s)
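
The stderr above prints one status struct per node (Name/Host/Kubelet/APIServer/Kubeconfig/Worker/...), and the command exits non-zero (status 7 here) because every host is stopped. A sketch of that shape plus the all-stopped fold a caller might apply; field types are inferred from the printed values, not taken from minikube's source:

	package main

	import "fmt"

	// nodeStatus mirrors the fields printed in the log above.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	// allStopped reports whether no host in the cluster is still running.
	func allStopped(statuses []nodeStatus) bool {
		for _, s := range statuses {
			if s.Host != "Stopped" {
				return false
			}
		}
		return true
	}

	func main() {
		sts := []nodeStatus{
			{Name: "ha-669997", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
			{Name: "ha-669997-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true},
		}
		fmt.Println(allStopped(sts)) // true
	}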

TestMultiControlPlane/serial/RestartCluster (117.19s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-669997 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 11:02:34.555731   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:03:57.618486   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-669997 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.472364318s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.19s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

TestMultiControlPlane/serial/AddSecondaryNode (76.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-669997 --control-plane -v=7 --alsologtostderr
E0127 11:04:26.925100   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-669997 --control-plane -v=7 --alsologtostderr: (1m15.390435835s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-669997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (78.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-564544 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-564544 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.74261522s)
--- PASS: TestJSONOutput/start/Command (78.74s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-564544 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-564544 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-564544 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-564544 --output=json --user=testUser: (7.329143812s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-528118 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-528118 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.529539ms)

-- stdout --
	{"specversion":"1.0","id":"f4073c71-af5f-46f3-a5b9-829b72328262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-528118] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ead2082c-9f66-4530-87ec-b248b82dbb7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20319"}}
	{"specversion":"1.0","id":"046fb0e7-cecb-414a-8433-a8b12193de4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d93980d-26a5-42b2-a5d9-787ee9a4d077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig"}}
	{"specversion":"1.0","id":"2c8ade2d-e4ab-47e4-891e-d1e6ee092ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube"}}
	{"specversion":"1.0","id":"d28938f1-c773-4cfb-be4a-43bbcc039033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4571c7a4-26fb-437c-b984-2c24abef95b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e7ac5d4e-485a-42c6-98af-a41d6c0b3e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-528118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-528118
--- PASS: TestErrorJSONOutput (0.19s)
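
Each stdout line above is a CloudEvents envelope. A sketch of decoding one line into just the fields visible in this log (the sample is the final error event from the stdout above, trimmed to the fields the struct models):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event models only the envelope fields shown in the log output.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := []byte(`{"specversion":"1.0","id":"e7ac5d4e-485a-42c6-98af-a41d6c0b3e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`)
		var ev event
		if err := json.Unmarshal(line, &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
		// io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
	}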

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (83.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-894022 --driver=kvm2  --container-runtime=crio
E0127 11:07:34.555797   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-894022 --driver=kvm2  --container-runtime=crio: (38.465223915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-905634 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-905634 --driver=kvm2  --container-runtime=crio: (42.434587505s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-894022
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-905634
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-905634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-905634
helpers_test.go:175: Cleaning up "first-894022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-894022
--- PASS: TestMinikubeProfile (83.69s)

TestMountStart/serial/StartWithMountFirst (26.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-688065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-688065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.617962403s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.62s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-688065 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-688065 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
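
The VerifyMount* steps reduce to running mount | grep 9p over SSH. A sketch of the same check done by scanning mount(8) output for a 9p entry at the expected target; the sample line is illustrative:

	package main

	import (
		"fmt"
		"strings"
	)

	// has9pMount reports whether mount output shows a 9p filesystem
	// mounted at target, i.e. what `mount | grep 9p` verifies above.
	func has9pMount(mountOutput, target string) bool {
		for _, line := range strings.Split(mountOutput, "\n") {
			if strings.Contains(line, " type 9p ") && strings.Contains(line, " on "+target+" ") {
				return true
			}
		}
		return false
	}

	func main() {
		out := "192.168.39.1 on /minikube-host type 9p (rw,relatime)\n"
		fmt.Println(has9pMount(out, "/minikube-host")) // true
	}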

TestMountStart/serial/StartWithMountSecond (26.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-704909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-704909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.783688704s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.78s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-688065 --alsologtostderr -v=5
E0127 11:09:26.925219   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-704909
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-704909: (1.267749401s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (22.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-704909
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-704909: (21.164381798s)
--- PASS: TestMountStart/serial/RestartStopped (22.17s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-704909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (143.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-751108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-751108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m23.065788186s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (143.45s)

TestMultiNode/serial/DeployApp2Nodes (5.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-751108 -- rollout status deployment/busybox: (3.667089016s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-9qkc5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-wrldp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-9qkc5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-wrldp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-9qkc5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-wrldp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.09s)
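
Each DeployApp2Nodes lookup above only proves a name resolves from inside a pod. A sketch of the equivalent check in Go; note the cluster-internal names would only resolve when run inside the cluster's DNS domain:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The same three names the test resolves via nslookup in each pod.
		for _, host := range []string{
			"kubernetes.io",
			"kubernetes.default",
			"kubernetes.default.svc.cluster.local",
		} {
			addrs, err := net.LookupHost(host)
			fmt.Println(host, addrs, err)
		}
	}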

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-9qkc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-9qkc5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-wrldp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-751108 -- exec busybox-58667487b6-wrldp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
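
The host IP is scraped from busybox nslookup output with awk 'NR==5' | cut -d' ' -f3. A sketch of the same extraction; the sample output is illustrative of busybox's format, where line 5 reads "Address 1: <ip> <name>" and field 3 is the address:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
	// nslookup output and return its third single-space-separated field.
	func hostIP(nslookup string) string {
		lines := strings.Split(nslookup, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // NR==5, split like cut -d' '
		if len(fields) < 3 {
			return ""
		}
		return fields[2] // -f3
	}

	func main() {
		out := "Server: 10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
		fmt.Println(hostIP(out)) // 192.168.39.1
	}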

TestMultiNode/serial/AddNode (49.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-751108 -v 3 --alsologtostderr
E0127 11:12:30.002478   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:12:34.555852   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-751108 -v 3 --alsologtostderr: (49.110472518s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.66s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-751108 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
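
MultiNodeLabels dumps .metadata.labels for every node via jsonpath. A sketch of checking one decoded label map; that minikube stamps nodes with minikube.k8s.io/* keys (e.g. minikube.k8s.io/name) is an assumption inferred from what this test exercises, not quoted from it:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Hypothetical label map for one node, shaped like the jsonpath output.
		raw := []byte(`{"kubernetes.io/hostname":"multinode-751108","minikube.k8s.io/name":"multinode-751108"}`)
		var labels map[string]string
		if err := json.Unmarshal(raw, &labels); err != nil {
			panic(err)
		}
		_, ok := labels["minikube.k8s.io/name"]
		fmt.Println("has minikube.k8s.io/name:", ok) // true
	}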

TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

TestMultiNode/serial/CopyFile (6.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp testdata/cp-test.txt multinode-751108:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3966074436/001/cp-test_multinode-751108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108:/home/docker/cp-test.txt multinode-751108-m02:/home/docker/cp-test_multinode-751108_multinode-751108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test_multinode-751108_multinode-751108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108:/home/docker/cp-test.txt multinode-751108-m03:/home/docker/cp-test_multinode-751108_multinode-751108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test_multinode-751108_multinode-751108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp testdata/cp-test.txt multinode-751108-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3966074436/001/cp-test_multinode-751108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m02:/home/docker/cp-test.txt multinode-751108:/home/docker/cp-test_multinode-751108-m02_multinode-751108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test_multinode-751108-m02_multinode-751108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m02:/home/docker/cp-test.txt multinode-751108-m03:/home/docker/cp-test_multinode-751108-m02_multinode-751108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test_multinode-751108-m02_multinode-751108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp testdata/cp-test.txt multinode-751108-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3966074436/001/cp-test_multinode-751108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m03:/home/docker/cp-test.txt multinode-751108:/home/docker/cp-test_multinode-751108-m03_multinode-751108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108 "sudo cat /home/docker/cp-test_multinode-751108-m03_multinode-751108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 cp multinode-751108-m03:/home/docker/cp-test.txt multinode-751108-m02:/home/docker/cp-test_multinode-751108-m03_multinode-751108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 ssh -n multinode-751108-m02 "sudo cat /home/docker/cp-test_multinode-751108-m03_multinode-751108-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.88s)
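
The CopyFile sequence above is a matrix: push testdata/cp-test.txt to each node, read it back, then copy node-to-node for every ordered pair and verify on the destination. A sketch generating the node-to-node portion of that step list for the three nodes in this run:

	package main

	import "fmt"

	func main() {
		nodes := []string{"multinode-751108", "multinode-751108-m02", "multinode-751108-m03"}
		for _, src := range nodes {
			// Seed the source node, as the `cp testdata/cp-test.txt` steps do.
			fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				// One node-to-node copy per ordered (src, dst) pair.
				fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n", src, dst, src, dst)
			}
		}
	}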

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-751108 node stop m03: (1.371372134s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-751108 status: exit status 7 (401.414587ms)

-- stdout --
	multinode-751108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-751108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-751108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr: exit status 7 (407.018841ms)

-- stdout --
	multinode-751108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-751108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-751108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:13:20.379339   51929 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:13:20.379426   51929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:13:20.379434   51929 out.go:358] Setting ErrFile to fd 2...
	I0127 11:13:20.379438   51929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:13:20.379600   51929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:13:20.379800   51929 out.go:352] Setting JSON to false
	I0127 11:13:20.379828   51929 mustload.go:65] Loading cluster: multinode-751108
	I0127 11:13:20.379936   51929 notify.go:220] Checking for updates...
	I0127 11:13:20.380200   51929 config.go:182] Loaded profile config "multinode-751108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:13:20.380217   51929 status.go:174] checking status of multinode-751108 ...
	I0127 11:13:20.380631   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.380664   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.395790   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0127 11:13:20.396210   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.396796   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.396830   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.397168   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.397364   51929 main.go:141] libmachine: (multinode-751108) Calling .GetState
	I0127 11:13:20.399145   51929 status.go:371] multinode-751108 host status = "Running" (err=<nil>)
	I0127 11:13:20.399171   51929 host.go:66] Checking if "multinode-751108" exists ...
	I0127 11:13:20.399620   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.399668   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.414058   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0127 11:13:20.414421   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.414832   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.414855   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.415192   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.415395   51929 main.go:141] libmachine: (multinode-751108) Calling .GetIP
	I0127 11:13:20.418387   51929 main.go:141] libmachine: (multinode-751108) DBG | domain multinode-751108 has defined MAC address 52:54:00:4d:18:e8 in network mk-multinode-751108
	I0127 11:13:20.418846   51929 main.go:141] libmachine: (multinode-751108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:18:e8", ip: ""} in network mk-multinode-751108: {Iface:virbr1 ExpiryTime:2025-01-27 12:10:06 +0000 UTC Type:0 Mac:52:54:00:4d:18:e8 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-751108 Clientid:01:52:54:00:4d:18:e8}
	I0127 11:13:20.418879   51929 main.go:141] libmachine: (multinode-751108) DBG | domain multinode-751108 has defined IP address 192.168.39.185 and MAC address 52:54:00:4d:18:e8 in network mk-multinode-751108
	I0127 11:13:20.418953   51929 host.go:66] Checking if "multinode-751108" exists ...
	I0127 11:13:20.419281   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.419313   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.435985   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0127 11:13:20.436351   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.436833   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.436860   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.437140   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.437353   51929 main.go:141] libmachine: (multinode-751108) Calling .DriverName
	I0127 11:13:20.437542   51929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:13:20.437567   51929 main.go:141] libmachine: (multinode-751108) Calling .GetSSHHostname
	I0127 11:13:20.440381   51929 main.go:141] libmachine: (multinode-751108) DBG | domain multinode-751108 has defined MAC address 52:54:00:4d:18:e8 in network mk-multinode-751108
	I0127 11:13:20.440834   51929 main.go:141] libmachine: (multinode-751108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:18:e8", ip: ""} in network mk-multinode-751108: {Iface:virbr1 ExpiryTime:2025-01-27 12:10:06 +0000 UTC Type:0 Mac:52:54:00:4d:18:e8 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-751108 Clientid:01:52:54:00:4d:18:e8}
	I0127 11:13:20.440857   51929 main.go:141] libmachine: (multinode-751108) DBG | domain multinode-751108 has defined IP address 192.168.39.185 and MAC address 52:54:00:4d:18:e8 in network mk-multinode-751108
	I0127 11:13:20.441000   51929 main.go:141] libmachine: (multinode-751108) Calling .GetSSHPort
	I0127 11:13:20.441161   51929 main.go:141] libmachine: (multinode-751108) Calling .GetSSHKeyPath
	I0127 11:13:20.441282   51929 main.go:141] libmachine: (multinode-751108) Calling .GetSSHUsername
	I0127 11:13:20.441396   51929 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/multinode-751108/id_rsa Username:docker}
	I0127 11:13:20.523180   51929 ssh_runner.go:195] Run: systemctl --version
	I0127 11:13:20.529099   51929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:13:20.542902   51929 kubeconfig.go:125] found "multinode-751108" server: "https://192.168.39.185:8443"
	I0127 11:13:20.542938   51929 api_server.go:166] Checking apiserver status ...
	I0127 11:13:20.542977   51929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:13:20.557725   51929 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup
	W0127 11:13:20.566569   51929 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:13:20.566618   51929 ssh_runner.go:195] Run: ls
	I0127 11:13:20.570449   51929 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0127 11:13:20.574566   51929 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0127 11:13:20.574586   51929 status.go:463] multinode-751108 apiserver status = Running (err=<nil>)
	I0127 11:13:20.574600   51929 status.go:176] multinode-751108 status: &{Name:multinode-751108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:13:20.574616   51929 status.go:174] checking status of multinode-751108-m02 ...
	I0127 11:13:20.574884   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.574918   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.589871   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0127 11:13:20.590358   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.590834   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.590853   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.591242   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.591417   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetState
	I0127 11:13:20.592993   51929 status.go:371] multinode-751108-m02 host status = "Running" (err=<nil>)
	I0127 11:13:20.593010   51929 host.go:66] Checking if "multinode-751108-m02" exists ...
	I0127 11:13:20.593320   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.593354   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.608006   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0127 11:13:20.608366   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.608841   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.608869   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.609167   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.609347   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetIP
	I0127 11:13:20.612290   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | domain multinode-751108-m02 has defined MAC address 52:54:00:f1:23:65 in network mk-multinode-751108
	I0127 11:13:20.612714   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:23:65", ip: ""} in network mk-multinode-751108: {Iface:virbr1 ExpiryTime:2025-01-27 12:11:35 +0000 UTC Type:0 Mac:52:54:00:f1:23:65 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-751108-m02 Clientid:01:52:54:00:f1:23:65}
	I0127 11:13:20.612747   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | domain multinode-751108-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:f1:23:65 in network mk-multinode-751108
	I0127 11:13:20.612903   51929 host.go:66] Checking if "multinode-751108-m02" exists ...
	I0127 11:13:20.613284   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.613324   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.628178   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0127 11:13:20.628613   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.629088   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.629107   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.629379   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.629506   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .DriverName
	I0127 11:13:20.629656   51929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:13:20.629678   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetSSHHostname
	I0127 11:13:20.632300   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | domain multinode-751108-m02 has defined MAC address 52:54:00:f1:23:65 in network mk-multinode-751108
	I0127 11:13:20.632644   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:23:65", ip: ""} in network mk-multinode-751108: {Iface:virbr1 ExpiryTime:2025-01-27 12:11:35 +0000 UTC Type:0 Mac:52:54:00:f1:23:65 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-751108-m02 Clientid:01:52:54:00:f1:23:65}
	I0127 11:13:20.632676   51929 main.go:141] libmachine: (multinode-751108-m02) DBG | domain multinode-751108-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:f1:23:65 in network mk-multinode-751108
	I0127 11:13:20.632827   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetSSHPort
	I0127 11:13:20.633000   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetSSHKeyPath
	I0127 11:13:20.633156   51929 main.go:141] libmachine: (multinode-751108-m02) Calling .GetSSHUsername
	I0127 11:13:20.633340   51929 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-18835/.minikube/machines/multinode-751108-m02/id_rsa Username:docker}
	I0127 11:13:20.710277   51929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:13:20.722795   51929 status.go:176] multinode-751108-m02 status: &{Name:multinode-751108-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:13:20.722823   51929 status.go:174] checking status of multinode-751108-m03 ...
	I0127 11:13:20.723097   51929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:13:20.723138   51929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:13:20.738127   51929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37507
	I0127 11:13:20.738583   51929 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:13:20.739081   51929 main.go:141] libmachine: Using API Version  1
	I0127 11:13:20.739103   51929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:13:20.739424   51929 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:13:20.739577   51929 main.go:141] libmachine: (multinode-751108-m03) Calling .GetState
	I0127 11:13:20.740957   51929 status.go:371] multinode-751108-m03 host status = "Stopped" (err=<nil>)
	I0127 11:13:20.740972   51929 status.go:384] host is not running, skipping remaining checks
	I0127 11:13:20.740979   51929 status.go:176] multinode-751108-m03 status: &{Name:multinode-751108-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)

TestMultiNode/serial/StartAfterStop (42.21s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-751108 node start m03 -v=7 --alsologtostderr: (41.613375554s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.21s)
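For reference, the recovery flow exercised here can be reproduced by hand against any multi-node profile; a minimal sketch using this run's profile and node names:

    out/minikube-linux-amd64 -p multinode-751108 node start m03 -v=7 --alsologtostderr   # restart the stopped worker
    out/minikube-linux-amd64 -p multinode-751108 status -v=7 --alsologtostderr           # all nodes should report Running
    kubectl get nodes                                                                    # the worker rejoins the cluster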

TestMultiNode/serial/RestartKeepsNodes (327.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-751108
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-751108
E0127 11:14:26.926284   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-751108: (3m3.013199775s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-751108 --wait=true -v=8 --alsologtostderr
E0127 11:17:34.555395   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:19:26.924948   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-751108 --wait=true -v=8 --alsologtostderr: (2m24.680056568s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-751108
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.79s)
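The property being checked is that a full stop/start cycle preserves node membership. A minimal sketch of the same cycle, assuming the profile from this run:

    out/minikube-linux-amd64 node list -p multinode-751108          # record the node list
    out/minikube-linux-amd64 stop -p multinode-751108               # stop every node in the profile
    out/minikube-linux-amd64 start -p multinode-751108 --wait=true  # restart and wait for components
    out/minikube-linux-amd64 node list -p multinode-751108          # list should match the pre-stop output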

TestMultiNode/serial/DeleteNode (2.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-751108 node delete m03: (2.159404744s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.69s)
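The go-template above flattens each node's Ready condition into one status token per node. A sketch of the delete-and-verify sequence, assuming the cluster from this run:

    out/minikube-linux-amd64 -p multinode-751108 node delete m03
    # one "True" per remaining node indicates every survivor is Ready
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'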

TestMultiNode/serial/StopMultiNode (182.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 stop
E0127 11:20:37.620717   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:22:34.562437   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-751108 stop: (3m1.86699561s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-751108 status: exit status 7 (81.283238ms)

-- stdout --
	multinode-751108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-751108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr: exit status 7 (80.528168ms)

-- stdout --
	multinode-751108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-751108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:22:35.415811   55371 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:22:35.416350   55371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:22:35.416366   55371 out.go:358] Setting ErrFile to fd 2...
	I0127 11:22:35.416374   55371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:22:35.416797   55371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:22:35.417088   55371 out.go:352] Setting JSON to false
	I0127 11:22:35.417123   55371 mustload.go:65] Loading cluster: multinode-751108
	I0127 11:22:35.417201   55371 notify.go:220] Checking for updates...
	I0127 11:22:35.417953   55371 config.go:182] Loaded profile config "multinode-751108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:22:35.417982   55371 status.go:174] checking status of multinode-751108 ...
	I0127 11:22:35.418449   55371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:22:35.418485   55371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:22:35.432680   55371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0127 11:22:35.433097   55371 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:22:35.433627   55371 main.go:141] libmachine: Using API Version  1
	I0127 11:22:35.433641   55371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:22:35.433945   55371 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:22:35.434181   55371 main.go:141] libmachine: (multinode-751108) Calling .GetState
	I0127 11:22:35.435619   55371 status.go:371] multinode-751108 host status = "Stopped" (err=<nil>)
	I0127 11:22:35.435634   55371 status.go:384] host is not running, skipping remaining checks
	I0127 11:22:35.435640   55371 status.go:176] multinode-751108 status: &{Name:multinode-751108 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:22:35.435665   55371 status.go:174] checking status of multinode-751108-m02 ...
	I0127 11:22:35.435990   55371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 11:22:35.436029   55371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 11:22:35.450081   55371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0127 11:22:35.450488   55371 main.go:141] libmachine: () Calling .GetVersion
	I0127 11:22:35.450879   55371 main.go:141] libmachine: Using API Version  1
	I0127 11:22:35.450901   55371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 11:22:35.451292   55371 main.go:141] libmachine: () Calling .GetMachineName
	I0127 11:22:35.451456   55371 main.go:141] libmachine: (multinode-751108-m02) Calling .GetState
	I0127 11:22:35.453019   55371 status.go:371] multinode-751108-m02 host status = "Stopped" (err=<nil>)
	I0127 11:22:35.453041   55371 status.go:384] host is not running, skipping remaining checks
	I0127 11:22:35.453048   55371 status.go:176] multinode-751108-m02 status: &{Name:multinode-751108-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.03s)
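The non-zero exits above are the expected outcome: `minikube status` exits with status 7 when the host is stopped, so the test accepts exit status 7 together with the all-Stopped output. A minimal sketch (the echo of $? is a generic shell check, not part of the test):

    out/minikube-linux-amd64 -p multinode-751108 stop
    out/minikube-linux-amd64 -p multinode-751108 status
    echo $?   # 7 signals a stopped host, not a command failure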

TestMultiNode/serial/RestartMultiNode (95.2s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-751108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-751108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.683823878s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-751108 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.20s)

TestMultiNode/serial/ValidateNameConflict (43.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-751108
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-751108-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-751108-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.63205ms)

-- stdout --
	* [multinode-751108-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-751108-m02' is duplicated with machine name 'multinode-751108-m02' in profile 'multinode-751108'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-751108-m03 --driver=kvm2  --container-runtime=crio
E0127 11:24:26.925975   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-751108-m03 --driver=kvm2  --container-runtime=crio: (42.189026601s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-751108
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-751108: exit status 80 (199.890987ms)

-- stdout --
	* Adding node m03 to cluster multinode-751108 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-751108-m03 already exists in multinode-751108-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-751108-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.28s)
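Both rejections above are the intended behavior: a new profile name may not collide with a machine name inside an existing profile (exit status 14, MK_USAGE), and `node add` refuses a node name that already exists as a standalone profile (exit status 80, GUEST_NODE_ADD). The conflicting call, for reference:

    # fails: multinode-751108-m02 is already a machine name in profile multinode-751108
    out/minikube-linux-amd64 start -p multinode-751108-m02 --driver=kvm2 --container-runtime=crio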

TestScheduledStopUnix (113.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-794344 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-794344 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.338316552s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794344 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-794344 -n scheduled-stop-794344
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794344 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 11:30:20.074316   26072 retry.go:31] will retry after 101.899µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.075461   26072 retry.go:31] will retry after 80.931µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.076620   26072 retry.go:31] will retry after 257.131µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.077761   26072 retry.go:31] will retry after 282.628µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.078928   26072 retry.go:31] will retry after 527.034µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.080087   26072 retry.go:31] will retry after 716.453µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.081244   26072 retry.go:31] will retry after 695.038µs: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.082371   26072 retry.go:31] will retry after 1.644827ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.084581   26072 retry.go:31] will retry after 3.023853ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.087722   26072 retry.go:31] will retry after 4.585207ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.092951   26072 retry.go:31] will retry after 7.327349ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.101183   26072 retry.go:31] will retry after 9.396444ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.111410   26072 retry.go:31] will retry after 12.487526ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.124621   26072 retry.go:31] will retry after 22.595124ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
I0127 11:30:20.147877   26072 retry.go:31] will retry after 25.58891ms: open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/scheduled-stop-794344/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794344 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794344 -n scheduled-stop-794344
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794344
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794344 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794344
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-794344: exit status 7 (64.153435ms)

-- stdout --
	scheduled-stop-794344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794344 -n scheduled-stop-794344
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794344 -n scheduled-stop-794344: exit status 7 (63.196402ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-794344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-794344
--- PASS: TestScheduledStopUnix (113.91s)
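The schedule/cancel/reschedule flow above can be driven by hand; a minimal sketch with this run's profile name:

    out/minikube-linux-amd64 stop -p scheduled-stop-794344 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-794344 --cancel-scheduled   # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-794344 --schedule 15s       # re-arm; the VM stops ~15s later
    out/minikube-linux-amd64 status -p scheduled-stop-794344                    # exit status 7 once stopped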

TestRunningBinaryUpgrade (186.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3702509201 start -p running-upgrade-968925 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0127 11:32:34.555531   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3702509201 start -p running-upgrade-968925 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m53.952055012s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-968925 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-968925 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.645327788s)
helpers_test.go:175: Cleaning up "running-upgrade-968925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-968925
--- PASS: TestRunningBinaryUpgrade (186.15s)
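The upgrade path exercised here: bootstrap a cluster with an old release binary, then point the current binary at the same profile while the cluster is still running. In outline (the /tmp path is the old release the test downloads):

    /tmp/minikube-v1.26.0.3702509201 start -p running-upgrade-968925 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-968925 --memory=2200 --driver=kvm2 --container-runtime=crio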

TestPause/serial/Start (102.61s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-900843 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-900843 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.610636121s)
--- PASS: TestPause/serial/Start (102.61s)

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestStoppedBinaryUpgrade/Upgrade (233.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1894501257 start -p stopped-upgrade-943115 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1894501257 start -p stopped-upgrade-943115 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m30.95593581s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1894501257 -p stopped-upgrade-943115 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1894501257 -p stopped-upgrade-943115 stop: (1m30.823247022s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-943115 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-943115 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.520507616s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (233.30s)
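This variant differs from TestRunningBinaryUpgrade only in that the old binary stops the cluster first, so the new binary upgrades a cold profile rather than a live one:

    /tmp/minikube-v1.26.0.1894501257 start -p stopped-upgrade-943115 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.1894501257 -p stopped-upgrade-943115 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-943115 --memory=2200 --driver=kvm2 --container-runtime=crio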

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (62.433506ms)

-- stdout --
	* [NoKubernetes-200407] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
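The exit status 14 is the guard under test: --no-kubernetes and --kubernetes-version are mutually exclusive. If a kubernetes-version value has been persisted in the global config, clearing it as the error message suggests lets --no-kubernetes proceed:

    minikube config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --driver=kvm2 --container-runtime=crio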

TestNoKubernetes/serial/StartWithK8s (70.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200407 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200407 --driver=kvm2  --container-runtime=crio: (1m10.612907189s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200407 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.85s)

TestNetworkPlugins/group/false (3.36s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-673007 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-673007 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.093396ms)

-- stdout --
	* [false-673007] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0127 11:35:11.310382   62830 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:35:11.310530   62830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:11.310540   62830 out.go:358] Setting ErrFile to fd 2...
	I0127 11:35:11.310545   62830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:35:11.310708   62830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-18835/.minikube/bin
	I0127 11:35:11.311292   62830 out.go:352] Setting JSON to false
	I0127 11:35:11.312282   62830 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8211,"bootTime":1737969500,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 11:35:11.312379   62830 start.go:139] virtualization: kvm guest
	I0127 11:35:11.314845   62830 out.go:177] * [false-673007] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 11:35:11.316104   62830 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:35:11.316132   62830 notify.go:220] Checking for updates...
	I0127 11:35:11.318742   62830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:35:11.320043   62830 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-18835/kubeconfig
	I0127 11:35:11.321302   62830 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-18835/.minikube
	I0127 11:35:11.322767   62830 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 11:35:11.323912   62830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:35:11.325478   62830 config.go:182] Loaded profile config "NoKubernetes-200407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:35:11.325588   62830 config.go:182] Loaded profile config "kubernetes-upgrade-480798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 11:35:11.325690   62830 config.go:182] Loaded profile config "stopped-upgrade-943115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 11:35:11.325785   62830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:35:11.363112   62830 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 11:35:11.364465   62830 start.go:297] selected driver: kvm2
	I0127 11:35:11.364485   62830 start.go:901] validating driver "kvm2" against <nil>
	I0127 11:35:11.364500   62830 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:35:11.367556   62830 out.go:201] 
	W0127 11:35:11.369050   62830 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 11:35:11.370333   62830 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-673007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-673007

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-673007

>>> host: /etc/nsswitch.conf:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/hosts:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/resolv.conf:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-673007

>>> host: crictl pods:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: crictl containers:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> k8s: describe netcat deployment:
error: context "false-673007" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-673007" does not exist

>>> k8s: netcat logs:
error: context "false-673007" does not exist

>>> k8s: describe coredns deployment:
error: context "false-673007" does not exist

>>> k8s: describe coredns pods:
error: context "false-673007" does not exist

>>> k8s: coredns logs:
error: context "false-673007" does not exist

>>> k8s: describe api server pod(s):
error: context "false-673007" does not exist

>>> k8s: api server logs:
error: context "false-673007" does not exist

>>> host: /etc/cni:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: ip a s:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: ip r s:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: iptables-save:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: iptables table nat:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> k8s: describe kube-proxy daemon set:
error: context "false-673007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-673007" does not exist

>>> k8s: kube-proxy logs:
error: context "false-673007" does not exist

>>> host: kubelet daemon status:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: kubelet daemon config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> k8s: kubelet logs:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-673007

>>> host: docker daemon status:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: docker daemon config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/docker/daemon.json:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: docker system info:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: cri-docker daemon status:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: cri-docker daemon config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: cri-dockerd version:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: containerd daemon status:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: containerd daemon config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/containerd/config.toml:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: containerd config dump:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: crio daemon status:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: crio daemon config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: /etc/crio:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

>>> host: crio config:
* Profile "false-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-673007"

----------------------- debugLogs end: false-673007 [took: 3.093193246s] --------------------------------
helpers_test.go:175: Cleaning up "false-673007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-673007
--- PASS: TestNetworkPlugins/group/false (3.36s)
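The MK_USAGE rejection is the expected result: CRI-O ships no built-in pod networking, so minikube refuses --cni=false with that runtime. A start it would accept swaps in an explicit CNI; bridge is shown here only as one example value:

    out/minikube-linux-amd64 start -p false-673007 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio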

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-943115
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestNoKubernetes/serial/StartWithStopK8s (42.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.558449343s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200407 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-200407 status -o json: exit status 2 (237.010126ms)

-- stdout --
	{"Name":"NoKubernetes-200407","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-200407
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-200407: (1.500717413s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.30s)

TestNoKubernetes/serial/Start (27.37s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200407 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.368948588s)
--- PASS: TestNoKubernetes/serial/Start (27.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200407 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200407 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.081243ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
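
Note: "systemctl is-active" exits 0 only for an active unit (systemd returns 3 for an inactive one, visible in the stderr above and surfaced by minikube ssh as exit status 1), so the non-zero exit is the passing condition: kubelet must not be running. An equivalent manual check, with the inner command taken verbatim from the test:

	# A non-zero exit means kubelet is inactive, which is what this test wants.
	out/minikube-linux-amd64 ssh -p NoKubernetes-200407 \
	  "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running (expected)"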

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-200407
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-200407: (1.286773154s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (42.72s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200407 --driver=kvm2  --container-runtime=crio
E0127 11:37:17.622605   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200407 --driver=kvm2  --container-runtime=crio: (42.721700579s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.72s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200407 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200407 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.68455ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (140.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (2m20.990828392s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (140.99s)
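
Note: --preload=false disables minikube's preloaded-images tarball, so the container runtime pulls each component image during start; that is consistent with this being the slowest FirstStart in the group (2m20s here versus roughly 55s for embed-certs below). The invocation, reflowed from the run above:

	# With no preload tarball, every image is fetched individually,
	# so a noticeably longer first start is expected.
	out/minikube-linux-amd64 start -p no-preload-273200 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1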

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-986409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (54.925899277s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-407489 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m42.726126399s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-273200 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e8c33c2-a723-446c-9bb0-fa0e40936219] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e8c33c2-a723-446c-9bb0-fa0e40936219] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005260497s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-273200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)
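
Note: DeployApp is a two-step probe: wait until the busybox pod reports Running, then exec a trivial command (ulimit -n) to verify the container actually accepts exec rather than merely being scheduled. A by-hand equivalent against the same context ("kubectl wait" is an illustrative stand-in for the test's own polling helper):

	kubectl --context no-preload-273200 create -f testdata/busybox.yaml
	# Illustrative stand-in for the helpers_test.go poll loop:
	kubectl --context no-preload-273200 wait --for=condition=ready \
	  pod -l integration-test=busybox --timeout=8m
	kubectl --context no-preload-273200 exec busybox -- /bin/sh -c "ulimit -n"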

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-273200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-273200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-273200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-273200 --alsologtostderr -v=3: (1m31.020265253s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-986409 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a9cbcd7-7d5c-4132-9d76-e99f4cd48cfc] Pending
helpers_test.go:344: "busybox" [6a9cbcd7-7d5c-4132-9d76-e99f4cd48cfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a9cbcd7-7d5c-4132-9d76-e99f4cd48cfc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0037386s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-986409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-986409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-986409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-986409 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-986409 --alsologtostderr -v=3: (1m31.161008983s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-407489 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c5685b34-9382-4908-a090-8128b51e00a1] Pending
helpers_test.go:344: "busybox" [c5685b34-9382-4908-a090-8128b51e00a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c5685b34-9382-4908-a090-8128b51e00a1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003249109s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-407489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-407489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-407489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-407489 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-407489 --alsologtostderr -v=3: (1m31.149323146s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273200 -n no-preload-273200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273200 -n no-preload-273200: exit status 7 (64.859769ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-273200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
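
Note: exit status 7 from "minikube status" against a stopped profile is explicitly tolerated ("may be ok" above); the real assertion is that "addons enable" still succeeds while the cluster is down, persisting the dashboard addon in the profile config so it is applied on the next start. Sketch, commands as in the run above:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273200    # prints "Stopped", non-zero exit
	# Enabling an addon on a stopped profile only updates its config:
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-273200 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4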

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-986409 -n embed-certs-986409
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-986409 -n embed-certs-986409: exit status 7 (63.320091ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-986409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407489 -n default-k8s-diff-port-407489
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407489 -n default-k8s-diff-port-407489: exit status 7 (66.203385ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-407489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-570778 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-570778 --alsologtostderr -v=3: (2.292423796s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570778 -n old-k8s-version-570778: exit status 7 (70.360151ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-570778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-929622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-929622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (48.528283467s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (79.2s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.19743524s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-929622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-929622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.009168531s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-929622 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-929622 --alsologtostderr -v=3: (11.328209174s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929622 -n newest-cni-929622
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929622 -n newest-cni-929622: exit status 7 (75.234351ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-929622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (34.66s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-929622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-929622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (34.402771146s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929622 -n newest-cni-929622
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.66s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-673007 "pgrep -a kubelet"
I0127 12:09:07.202928   26072 config.go:182] Loaded profile config "auto-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ndx69" [30219fbe-3eac-4aab-b7c8-d8a3f298ece8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ndx69" [30219fbe-3eac-4aab-b7c8-d8a3f298ece8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.007015657s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-929622 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
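
Note: VerifyKubernetesImages lists the images cached in the node's container runtime and reports anything outside the stock minikube/Kubernetes set; the kindnet image flagged above is expected for a profile started with --network-plugin=cni. The underlying command, runnable as-is against this profile:

	# JSON array of images known to the container runtime on the node.
	out/minikube-linux-amd64 -p newest-cni-929622 image list --format=json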

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-929622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929622 -n newest-cni-929622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929622 -n newest-cni-929622: exit status 2 (272.571544ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929622 -n newest-cni-929622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929622 -n newest-cni-929622: exit status 2 (254.138354ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-929622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929622 -n newest-cni-929622
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929622 -n newest-cni-929622
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)
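
Note: pause freezes the Kubernetes control-plane containers without stopping the VM, so the two intermediate status checks are supposed to exit non-zero ("may be ok" above): {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped until unpause, after which the final two status calls exit cleanly. The cycle, condensed from the run above:

	out/minikube-linux-amd64 pause -p newest-cni-929622 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929622   # "Paused", exit 2
	out/minikube-linux-amd64 unpause -p newest-cni-929622 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929622   # healthy again, exit 0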

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
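
Note: the DNS/Localhost/HairPin trio (repeated below for every CNI under test) probes three distinct paths from inside the netcat deployment: cluster DNS resolution, loopback within the pod, and "hairpin" traffic in which the pod reaches itself through its own Service name. In the nc invocations, -z probes the port without sending data, -w 5 caps the wait at five seconds, and -i 5 spaces the probes. The three checks side by side:

	kubectl --context auto-673007 exec deployment/netcat -- nslookup kubernetes.default                 # cluster DNS
	kubectl --context auto-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" # loopback
	kubectl --context auto-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # hairpin via the netcat Service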

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0127 12:09:26.925158   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/addons-952541/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (59.509504637s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (91.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.887735191s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (100.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m40.928444878s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (100.93s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (98.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0127 12:10:05.855653   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:05.862086   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:05.873436   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:05.894827   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:05.936259   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:06.017741   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:06.179464   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:06.501192   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:07.143038   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:08.424805   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:10.986906   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:16.108610   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m38.154180431s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (98.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vrs6b" [18258571-936c-47bb-894d-946acf519590] Running
E0127 12:10:26.350970   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/no-preload-273200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004650925s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
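
Note: ControllerPod only confirms the CNI's own agent pod is healthy before the network is exercised; the selector and namespace vary per plugin (app=kindnet in kube-system here, k8s-app=calico-node for calico, app=flannel in kube-flannel further down). A manual equivalent ("kubectl wait" is an illustrative stand-in for the test's poll):

	kubectl --context kindnet-673007 get pods -n kube-system -l app=kindnet
	kubectl --context kindnet-673007 wait --for=condition=ready \
	  pod -l app=kindnet -n kube-system --timeout=10m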

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-673007 "pgrep -a kubelet"
I0127 12:10:28.874129   26072 config.go:182] Loaded profile config "kindnet-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8nchp" [0b4044b1-3ec4-4421-98d7-e9ea8f1afd34] Pending
helpers_test.go:344: "netcat-5d86dc444-8nchp" [0b4044b1-3ec4-4421-98d7-e9ea8f1afd34] Running
E0127 12:10:37.626296   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/functional-787474/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005543257s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (76.48s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m16.475862628s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.48s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bd96p" [9d1f418c-ccc6-4916-98bd-84673850d89c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005375897s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-673007 "pgrep -a kubelet"
I0127 12:11:14.087426   26072 config.go:182] Loaded profile config "calico-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qfd76" [8560e390-0f83-40cc-90fe-2c0c749cf304] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qfd76" [8560e390-0f83-40cc-90fe-2c0c749cf304] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00322701s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-673007 "pgrep -a kubelet"
I0127 12:11:20.351650   26072 config.go:182] Loaded profile config "custom-flannel-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-thbnj" [8c6c77e7-6385-4392-ae17-b5e875d2b3ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-thbnj" [8c6c77e7-6385-4392-ae17-b5e875d2b3ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00603684s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-673007 "pgrep -a kubelet"
I0127 12:11:33.954337   26072 config.go:182] Loaded profile config "enable-default-cni-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-789z8" [1d22b3b8-d7aa-4436-850b-aac16b55ebf0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 12:11:34.571328   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.577740   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.589055   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.610404   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.652104   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.734125   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:34.895426   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:35.217085   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:35.859326   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:37.140859   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:39.702867   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-789z8" [1d22b3b8-d7aa-4436-850b-aac16b55ebf0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007064009s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.35s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0127 12:11:44.824484   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-673007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m20.351905162s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
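
The HairPin checks in this group all follow one pattern: the pod dials back to itself through its own Service name, which only succeeds when hairpin NAT works on the node. A sketch of the probe pair, assuming the netcat Deployment and Service from these tests:

    # Dial the pod's own Service from inside the pod; -z only tests the connection
    kubectl --context enable-default-cni-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    # The Localhost variant bypasses the Service path entirely, isolating loopback from the CNI case
    kubectl --context enable-default-cni-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"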

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ltslr" [f26ac900-a5ea-4fdd-a025-f7fbae6d2919] Running
E0127 12:12:15.547670   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/default-k8s-diff-port-407489/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004096281s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
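
The ControllerPod step is a plain readiness wait on the app=flannel label. A roughly equivalent manual check, as a sketch (the 10m timeout mirrors the test's wait window):

    # Block until the flannel DaemonSet pod reports Ready
    kubectl --context flannel-673007 wait --for=condition=ready pod --selector=app=flannel --namespace=kube-flannel --timeout=10m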

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-673007 "pgrep -a kubelet"
I0127 12:12:19.728606   26072 config.go:182] Loaded profile config "flannel-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2zdgh" [9febdf05-3012-4f5c-a245-6e86c71a6a5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2zdgh" [9febdf05-3012-4f5c-a245-6e86c71a6a5d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0044783s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)
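
The helpers_test.go lines above are just a phase watch (Pending with unready containers, then Running). The same transition can be observed by hand; a sketch:

    # Watch the netcat pod move from Pending to Running
    kubectl --context flannel-673007 get pods --selector=app=netcat --watch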

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-673007 "pgrep -a kubelet"
I0127 12:13:03.982022   26072 config.go:182] Loaded profile config "bridge-673007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-673007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bgkn9" [373dc19f-6ad9-4fb8-a9a6-74b60b647801] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 12:13:04.235273   26072 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/old-k8s-version-570778/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-bgkn9" [373dc19f-6ad9-4fb8-a9a6-74b60b647801] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004319284s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-673007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)
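
The DNS step resolves the kubernetes.default Service from inside the pod, exercising the in-cluster DNS path over the bridge CNI. The same probe by hand, as a sketch (10.96.0.10 is the cluster DNS address these tests assume elsewhere):

    # Resolve the API Service name through cluster DNS
    kubectl --context bridge-673007 exec deployment/netcat -- nslookup kubernetes.default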

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-673007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (34/309)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-952541 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-429764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-429764
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (5.5s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-673007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-673007

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-673007

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/hosts:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/resolv.conf:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-673007

>>> host: crictl pods:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: crictl containers:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> k8s: describe netcat deployment:
error: context "kubenet-673007" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-673007" does not exist

>>> k8s: netcat logs:
error: context "kubenet-673007" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-673007" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-673007" does not exist

>>> k8s: coredns logs:
error: context "kubenet-673007" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-673007" does not exist

>>> k8s: api server logs:
error: context "kubenet-673007" does not exist

>>> host: /etc/cni:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: ip a s:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: ip r s:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: iptables-save:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: iptables table nat:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-673007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-673007" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-673007" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: kubelet daemon config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> k8s: kubelet logs:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-673007

>>> host: docker daemon status:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: docker daemon config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: docker system info:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: cri-docker daemon status:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: cri-docker daemon config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: cri-dockerd version:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: containerd daemon status:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: containerd daemon config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: containerd config dump:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: crio daemon status:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: crio daemon config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: /etc/crio:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

>>> host: crio config:
* Profile "kubenet-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-673007"

----------------------- debugLogs end: kubenet-673007 [took: 5.354720055s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-673007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-673007
--- SKIP: TestNetworkPlugins/group/kubenet (5.50s)
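
The skip reflects that minikube always provisions a CNI when the runtime is crio, so the CNI-less kubenet mode never applies. To see which CNI config a live crio profile actually received, a sketch against one of this run's profiles:

    # List the CNI config minikube dropped onto the node
    minikube ssh -p flannel-673007 "ls /etc/cni/net.d"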

TestNetworkPlugins/group/cilium (3.55s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-673007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-673007

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-673007" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-673007

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-673007

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-673007" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-673007" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-673007

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-673007

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-673007" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-673007" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-673007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-673007" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-673007" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: kubelet daemon config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> k8s: kubelet logs:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20319-18835/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 11:35:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.178:8443
  name: stopped-upgrade-943115
contexts:
- context:
    cluster: stopped-upgrade-943115
    user: stopped-upgrade-943115
  name: stopped-upgrade-943115
current-context: stopped-upgrade-943115
kind: Config
preferences: {}
users:
- name: stopped-upgrade-943115
  user:
    client-certificate: /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/stopped-upgrade-943115/client.crt
    client-key: /home/jenkins/minikube-integration/20319-18835/.minikube/profiles/stopped-upgrade-943115/client.key
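
The kubeconfig above is the root cause of every failure in this dump: the cilium-673007 profile has already been deleted, so the only context left is stopped-upgrade-943115, and every kubectl call that names cilium-673007 can only error out. This is easy to confirm from a shell (standard kubectl commands; the output shown is inferred from the kubeconfig above rather than captured from this run):

	$ kubectl config current-context
	stopped-upgrade-943115
	$ kubectl --context cilium-673007 get pods -A
	error: context "cilium-673007" does not exist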
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-673007

>>> host: docker daemon status:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: docker daemon config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: docker system info:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: cri-docker daemon status:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: cri-docker daemon config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: cri-dockerd version:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: containerd daemon status:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: containerd daemon config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: containerd config dump:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: crio daemon status:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: crio daemon config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: /etc/crio:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

>>> host: crio config:
* Profile "cilium-673007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-673007"

----------------------- debugLogs end: cilium-673007 [took: 3.394411685s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-673007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-673007
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)
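
Note that every query in the debugLogs dump failed for the same reason: collection ran after the cilium-673007 profile and its kubectl context had been torn down. A guard along these lines would avoid the noise (a sketch only; collect_debug_logs is a hypothetical placeholder, while the kubectl invocation is a standard command):

	# Gather per-context debug logs only if the context still exists.
	if kubectl config get-contexts -o name | grep -qx "cilium-673007"; then
	  collect_debug_logs "cilium-673007"   # hypothetical helper, not part of the harness
	else
	  echo ">>> skipping debug logs: context cilium-673007 no longer exists"
	fi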